Test Report: QEMU_macOS 18424

1ff1985e433cf64121c1d5b23135320107f58df6:2024-10-07:36542

Failed tests (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 39.65
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 9.97
27 TestAddons/Setup 9.96
28 TestCertOptions 10.16
29 TestCertExpiration 195.19
30 TestDockerFlags 10.13
31 TestForceSystemdFlag 10.15
32 TestForceSystemdEnv 10.47
38 TestErrorSpam/setup 9.78
47 TestFunctional/serial/StartWithProxy 9.87
49 TestFunctional/serial/SoftStart 5.27
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.18
61 TestFunctional/serial/MinikubeKubectlCmd 0.78
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.25
63 TestFunctional/serial/ExtraConfig 5.26
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.08
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.21
73 TestFunctional/parallel/StatusCmd 0.14
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.14
82 TestFunctional/parallel/CpCmd 0.29
84 TestFunctional/parallel/FileSync 0.09
85 TestFunctional/parallel/CertSync 0.34
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.05
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.13
102 TestFunctional/parallel/DockerEnv/bash 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
106 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.32
107 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
108 TestFunctional/parallel/ServiceCmd/List 0.05
109 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
110 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
111 TestFunctional/parallel/ServiceCmd/Format 0.05
112 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.33
113 TestFunctional/parallel/ServiceCmd/URL 0.05
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 98.07
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 30.06
141 TestMultiControlPlane/serial/StartCluster 9.97
142 TestMultiControlPlane/serial/DeployApp 90.27
143 TestMultiControlPlane/serial/PingHostFromPods 0.1
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
147 TestMultiControlPlane/serial/CopyFile 0.07
148 TestMultiControlPlane/serial/StopSecondaryNode 0.12
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
150 TestMultiControlPlane/serial/RestartSecondaryNode 42.88
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.25
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.12
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.09
155 TestMultiControlPlane/serial/StopCluster 3.91
156 TestMultiControlPlane/serial/RestartCluster 5.27
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.09
158 TestMultiControlPlane/serial/AddSecondaryNode 0.08
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.09
162 TestImageBuild/serial/Setup 9.92
165 TestJSONOutput/start/Command 9.74
171 TestJSONOutput/pause/Command 0.08
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.08
197 TestMountStart/serial/StartWithMountFirst 10.54
200 TestMultiNode/serial/FreshStart2Nodes 9.9
201 TestMultiNode/serial/DeployApp2Nodes 93.04
202 TestMultiNode/serial/PingHostFrom2Pods 0.1
203 TestMultiNode/serial/AddNode 0.08
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.09
206 TestMultiNode/serial/CopyFile 0.07
207 TestMultiNode/serial/StopNode 0.16
208 TestMultiNode/serial/StartAfterStop 45.54
209 TestMultiNode/serial/RestartKeepsNodes 7.45
210 TestMultiNode/serial/DeleteNode 0.12
211 TestMultiNode/serial/StopMultiNode 3.64
212 TestMultiNode/serial/RestartMultiNode 5.27
213 TestMultiNode/serial/ValidateNameConflict 20.36
217 TestPreload 9.92
219 TestScheduledStopUnix 9.88
220 TestSkaffold 16.22
223 TestRunningBinaryUpgrade 622.1
225 TestKubernetesUpgrade 17.32
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 0.96
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 0.96
241 TestStoppedBinaryUpgrade/Upgrade 575.54
243 TestPause/serial/Start 9.96
253 TestNoKubernetes/serial/StartWithK8s 9.78
254 TestNoKubernetes/serial/StartWithStopK8s 5.89
255 TestNoKubernetes/serial/Start 5.83
259 TestNoKubernetes/serial/StartNoArgs 5.89
261 TestNetworkPlugins/group/auto/Start 9.79
262 TestNetworkPlugins/group/kindnet/Start 9.79
263 TestNetworkPlugins/group/calico/Start 9.96
264 TestNetworkPlugins/group/custom-flannel/Start 9.84
265 TestNetworkPlugins/group/false/Start 9.87
266 TestNetworkPlugins/group/enable-default-cni/Start 9.76
267 TestNetworkPlugins/group/flannel/Start 9.89
269 TestNetworkPlugins/group/bridge/Start 9.78
270 TestNetworkPlugins/group/kubenet/Start 9.94
272 TestStartStop/group/old-k8s-version/serial/FirstStart 9.93
274 TestStartStop/group/no-preload/serial/FirstStart 9.94
275 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
276 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
279 TestStartStop/group/old-k8s-version/serial/SecondStart 5.8
280 TestStartStop/group/no-preload/serial/DeployApp 0.1
281 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
284 TestStartStop/group/no-preload/serial/SecondStart 5.27
285 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
286 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
287 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
288 TestStartStop/group/old-k8s-version/serial/Pause 0.11
290 TestStartStop/group/embed-certs/serial/FirstStart 10.09
291 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
292 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
293 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
294 TestStartStop/group/no-preload/serial/Pause 0.11
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10
297 TestStartStop/group/embed-certs/serial/DeployApp 0.1
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.11
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
303 TestStartStop/group/embed-certs/serial/SecondStart 5.27
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
307 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
309 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
310 TestStartStop/group/embed-certs/serial/Pause 0.11
312 TestStartStop/group/newest-cni/serial/FirstStart 10.12
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
321 TestStartStop/group/newest-cni/serial/SecondStart 5.28
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
325 TestStartStop/group/newest-cni/serial/Pause 0.12
TestDownloadOnly/v1.20.0/json-events (39.65s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-839000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-839000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (39.647439375s)

-- stdout --
	{"specversion":"1.0","id":"74fb9366-b157-4e00-b689-eaa90581dcc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-839000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"aaab9061-d993-440b-80bf-ac466a707326","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18424"}}
	{"specversion":"1.0","id":"b7150cd6-d4aa-40a1-9064-9c0f1d487da6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig"}}
	{"specversion":"1.0","id":"619e9774-d74f-4538-a7f6-5b7e0d915ccd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"f180a806-296b-4b5f-bce7-1f5af4744a5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"60de8e02-dd89-48c3-83a7-f1d5aaa3ed94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube"}}
	{"specversion":"1.0","id":"2cc9611e-e1e1-4098-8186-d11c41bd8b14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"45b088d9-b84e-49d2-b305-04dfe3338985","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c61e873a-ebd3-4a56-94a9-71ab5e07f2c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"89e48480-cf5b-4d5c-9f98-d76746b2bfec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ecd574a4-1ef3-48dd-8ffb-e56e5b6f825b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-839000\" primary control-plane node in \"download-only-839000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7b65e319-5f69-47aa-a08c-08c1729e2b43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a49053d-a2c0-4bb4-98a1-75dc028eeab3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106234f60 0x106234f60 0x106234f60 0x106234f60 0x106234f60 0x106234f60 0x106234f60] Decompressors:map[bz2:0x14000800c40 gz:0x14000800c48 tar:0x14000800bc0 tar.bz2:0x14000800be0 tar.gz:0x14000800bf0 tar.xz:0x14000800c00 tar.zst:0x14000800c20 tbz2:0x14000800be0 tgz:0x14000800bf0 txz:0x14000800c00 tzst:0x14000800c20 xz:0x14000800c50 zip:0x14000800c60 zst:0x14000800c58] Getters:map[file:0x140014a0560 http:0x1400071c140 https:0x1400071c190] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"b99d44be-e8c5-43ee-a159-6d01d4bb4be6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1007 05:12:02.842563   11285 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:12:02.842743   11285 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:12:02.842748   11285 out.go:358] Setting ErrFile to fd 2...
	I1007 05:12:02.842750   11285 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:12:02.842928   11285 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	W1007 05:12:02.843013   11285 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18424-10771/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18424-10771/.minikube/config/config.json: no such file or directory
	I1007 05:12:02.844863   11285 out.go:352] Setting JSON to true
	I1007 05:12:02.865401   11285 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6093,"bootTime":1728297029,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:12:02.865483   11285 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:12:02.870953   11285 out.go:97] [download-only-839000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	W1007 05:12:02.871202   11285 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball: no such file or directory
	I1007 05:12:02.871166   11285 notify.go:220] Checking for updates...
	I1007 05:12:02.874928   11285 out.go:169] MINIKUBE_LOCATION=18424
	I1007 05:12:02.884979   11285 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:12:02.891955   11285 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:12:02.902912   11285 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:12:02.911751   11285 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	W1007 05:12:02.919954   11285 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 05:12:02.920261   11285 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:12:02.923919   11285 out.go:97] Using the qemu2 driver based on user configuration
	I1007 05:12:02.923942   11285 start.go:297] selected driver: qemu2
	I1007 05:12:02.923960   11285 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:12:02.924049   11285 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:12:02.926895   11285 out.go:169] Automatically selected the socket_vmnet network
	I1007 05:12:02.933942   11285 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1007 05:12:02.934046   11285 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 05:12:02.934090   11285 cni.go:84] Creating CNI manager for ""
	I1007 05:12:02.934129   11285 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1007 05:12:02.934198   11285 start.go:340] cluster config:
	{Name:download-only-839000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-839000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:12:02.939417   11285 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:12:02.943957   11285 out.go:97] Downloading VM boot image ...
	I1007 05:12:02.943978   11285 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I1007 05:12:22.092074   11285 out.go:97] Starting "download-only-839000" primary control-plane node in "download-only-839000" cluster
	I1007 05:12:22.092110   11285 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 05:12:22.370915   11285 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1007 05:12:22.370975   11285 cache.go:56] Caching tarball of preloaded images
	I1007 05:12:22.371808   11285 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 05:12:22.375889   11285 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1007 05:12:22.375912   11285 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1007 05:12:22.934822   11285 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1007 05:12:41.131754   11285 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1007 05:12:41.131929   11285 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1007 05:12:41.826141   11285 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1007 05:12:41.826343   11285 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/download-only-839000/config.json ...
	I1007 05:12:41.826362   11285 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/download-only-839000/config.json: {Name:mk943d696e5b531ba5c348b81f378c7e975b4cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:12:41.826628   11285 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 05:12:41.826863   11285 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1007 05:12:42.405224   11285 out.go:193] 
	W1007 05:12:42.408338   11285 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106234f60 0x106234f60 0x106234f60 0x106234f60 0x106234f60 0x106234f60 0x106234f60] Decompressors:map[bz2:0x14000800c40 gz:0x14000800c48 tar:0x14000800bc0 tar.bz2:0x14000800be0 tar.gz:0x14000800bf0 tar.xz:0x14000800c00 tar.zst:0x14000800c20 tbz2:0x14000800be0 tgz:0x14000800bf0 txz:0x14000800c00 tzst:0x14000800c20 xz:0x14000800c50 zip:0x14000800c60 zst:0x14000800c58] Getters:map[file:0x140014a0560 http:0x1400071c140 https:0x1400071c190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1007 05:12:42.408362   11285 out_reason.go:110] 
	W1007 05:12:42.415234   11285 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:12:42.419256   11285 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-839000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (39.65s)
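The root cause surfaces in the error event above: the kubectl download for darwin/arm64 at v1.20.0 fails because the checksum file returns HTTP 404, presumably because upstream Kubernetes had not yet published darwin/arm64 client binaries at that version, so neither the binary nor its .sha256 exists. A minimal Go sketch to confirm the 404 independently of minikube; the URL is copied verbatim from the error, everything else is illustrative:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// The checksum file the getter fetches first; per the log it 404s.
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		// Expect "404 Not Found", matching "bad response code: 404" above.
		fmt.Println(url, "->", resp.Status)
	}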

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
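This subtest fails as a direct consequence of the download failure above: it only asserts that the earlier step left a kubectl binary in the cache. A sketch of the equivalent existence check (the path is copied from the failure message; this is not the test's actual code):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Cache path copied from the failure message above.
		path := "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
		if _, err := os.Stat(path); err != nil {
			// Reproduces the "no such file or directory" failure.
			fmt.Println("cached kubectl missing:", err)
			return
		}
		fmt.Println("cached kubectl present")
	}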

TestOffline (9.97s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-621000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-621000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.811859083s)

-- stdout --
	* [offline-docker-621000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-621000" primary control-plane node in "offline-docker-621000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-621000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:23:40.206777   12755 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:23:40.206944   12755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:23:40.206950   12755 out.go:358] Setting ErrFile to fd 2...
	I1007 05:23:40.206952   12755 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:23:40.207091   12755 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:23:40.208404   12755 out.go:352] Setting JSON to false
	I1007 05:23:40.228005   12755 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6791,"bootTime":1728297029,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:23:40.228110   12755 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:23:40.232594   12755 out.go:177] * [offline-docker-621000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:23:40.239533   12755 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:23:40.239564   12755 notify.go:220] Checking for updates...
	I1007 05:23:40.246452   12755 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:23:40.249464   12755 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:23:40.252525   12755 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:23:40.255451   12755 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:23:40.258423   12755 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:23:40.261825   12755 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:23:40.261873   12755 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:23:40.264366   12755 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:23:40.271438   12755 start.go:297] selected driver: qemu2
	I1007 05:23:40.271448   12755 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:23:40.271456   12755 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:23:40.273733   12755 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:23:40.274973   12755 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:23:40.277494   12755 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:23:40.277510   12755 cni.go:84] Creating CNI manager for ""
	I1007 05:23:40.277530   12755 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:23:40.277537   12755 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:23:40.277577   12755 start.go:340] cluster config:
	{Name:offline-docker-621000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-621000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:23:40.281915   12755 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:23:40.286427   12755 out.go:177] * Starting "offline-docker-621000" primary control-plane node in "offline-docker-621000" cluster
	I1007 05:23:40.294424   12755 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:23:40.294458   12755 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:23:40.294466   12755 cache.go:56] Caching tarball of preloaded images
	I1007 05:23:40.294561   12755 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:23:40.294566   12755 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:23:40.294640   12755 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/offline-docker-621000/config.json ...
	I1007 05:23:40.294654   12755 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/offline-docker-621000/config.json: {Name:mkc8d975549d2f8dace10c2246283510cbfd6633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:23:40.294915   12755 start.go:360] acquireMachinesLock for offline-docker-621000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:23:40.294959   12755 start.go:364] duration metric: took 38.25µs to acquireMachinesLock for "offline-docker-621000"
	I1007 05:23:40.294972   12755 start.go:93] Provisioning new machine with config: &{Name:offline-docker-621000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-621000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:23:40.294998   12755 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:23:40.299349   12755 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1007 05:23:40.314807   12755 start.go:159] libmachine.API.Create for "offline-docker-621000" (driver="qemu2")
	I1007 05:23:40.314834   12755 client.go:168] LocalClient.Create starting
	I1007 05:23:40.314905   12755 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:23:40.314941   12755 main.go:141] libmachine: Decoding PEM data...
	I1007 05:23:40.314951   12755 main.go:141] libmachine: Parsing certificate...
	I1007 05:23:40.315000   12755 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:23:40.315028   12755 main.go:141] libmachine: Decoding PEM data...
	I1007 05:23:40.315035   12755 main.go:141] libmachine: Parsing certificate...
	I1007 05:23:40.315410   12755 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:23:40.460161   12755 main.go:141] libmachine: Creating SSH key...
	I1007 05:23:40.560100   12755 main.go:141] libmachine: Creating Disk image...
	I1007 05:23:40.560108   12755 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:23:40.560287   12755 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/disk.qcow2
	I1007 05:23:40.570656   12755 main.go:141] libmachine: STDOUT: 
	I1007 05:23:40.570701   12755 main.go:141] libmachine: STDERR: 
	I1007 05:23:40.570793   12755 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/disk.qcow2 +20000M
	I1007 05:23:40.580415   12755 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:23:40.580460   12755 main.go:141] libmachine: STDERR: 
	I1007 05:23:40.580480   12755 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/disk.qcow2
	I1007 05:23:40.580486   12755 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:23:40.580498   12755 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:23:40.580525   12755 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:6b:89:93:d7:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/disk.qcow2
	I1007 05:23:40.582648   12755 main.go:141] libmachine: STDOUT: 
	I1007 05:23:40.582668   12755 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:23:40.582687   12755 client.go:171] duration metric: took 267.851708ms to LocalClient.Create
	I1007 05:23:42.582829   12755 start.go:128] duration metric: took 2.287866333s to createHost
	I1007 05:23:42.582844   12755 start.go:83] releasing machines lock for "offline-docker-621000", held for 2.287922791s
	W1007 05:23:42.582853   12755 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:23:42.587851   12755 out.go:177] * Deleting "offline-docker-621000" in qemu2 ...
	W1007 05:23:42.600391   12755 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:23:42.600402   12755 start.go:729] Will try again in 5 seconds ...
	I1007 05:23:47.602564   12755 start.go:360] acquireMachinesLock for offline-docker-621000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:23:47.603040   12755 start.go:364] duration metric: took 388.917µs to acquireMachinesLock for "offline-docker-621000"
	I1007 05:23:47.603194   12755 start.go:93] Provisioning new machine with config: &{Name:offline-docker-621000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-621000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:23:47.603506   12755 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:23:47.614164   12755 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1007 05:23:47.663049   12755 start.go:159] libmachine.API.Create for "offline-docker-621000" (driver="qemu2")
	I1007 05:23:47.663120   12755 client.go:168] LocalClient.Create starting
	I1007 05:23:47.663357   12755 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:23:47.663446   12755 main.go:141] libmachine: Decoding PEM data...
	I1007 05:23:47.663465   12755 main.go:141] libmachine: Parsing certificate...
	I1007 05:23:47.663549   12755 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:23:47.663614   12755 main.go:141] libmachine: Decoding PEM data...
	I1007 05:23:47.663627   12755 main.go:141] libmachine: Parsing certificate...
	I1007 05:23:47.664226   12755 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:23:47.817144   12755 main.go:141] libmachine: Creating SSH key...
	I1007 05:23:47.914250   12755 main.go:141] libmachine: Creating Disk image...
	I1007 05:23:47.914256   12755 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:23:47.914439   12755 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/disk.qcow2
	I1007 05:23:47.924250   12755 main.go:141] libmachine: STDOUT: 
	I1007 05:23:47.924272   12755 main.go:141] libmachine: STDERR: 
	I1007 05:23:47.924328   12755 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/disk.qcow2 +20000M
	I1007 05:23:47.932730   12755 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:23:47.932744   12755 main.go:141] libmachine: STDERR: 
	I1007 05:23:47.932761   12755 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/disk.qcow2
	I1007 05:23:47.932767   12755 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:23:47.932774   12755 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:23:47.932803   12755 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:b6:f9:b9:72:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/offline-docker-621000/disk.qcow2
	I1007 05:23:47.934621   12755 main.go:141] libmachine: STDOUT: 
	I1007 05:23:47.934635   12755 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:23:47.934647   12755 client.go:171] duration metric: took 271.504167ms to LocalClient.Create
	I1007 05:23:49.936815   12755 start.go:128] duration metric: took 2.333315125s to createHost
	I1007 05:23:49.936955   12755 start.go:83] releasing machines lock for "offline-docker-621000", held for 2.3339285s
	W1007 05:23:49.937352   12755 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-621000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-621000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:23:49.952070   12755 out.go:201] 
	W1007 05:23:49.956158   12755 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:23:49.956222   12755 out.go:270] * 
	* 
	W1007 05:23:49.959261   12755 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:23:49.970021   12755 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-621000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-10-07 05:23:49.985659 -0700 PDT m=+707.271761626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-621000 -n offline-docker-621000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-621000 -n offline-docker-621000: exit status 7 (70.334833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-621000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-621000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-621000
--- FAIL: TestOffline (9.97s)
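Every "Connection refused" failure in this run shares the same cause: nothing is listening on /var/run/socket_vmnet when socket_vmnet_client tries to hand QEMU its network file descriptor. A hedged Go sketch of a preflight probe for that socket; the path is taken from the logs above, and the dial is illustrative rather than minikube's own code:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path copied from the qemu2 driver logs above. Connecting
		// may require the same privileges the daemon was started with.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused": the socket file exists but no daemon
			// is accepting. "no such file or directory": the daemon never
			// created it.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the agent installed socket_vmnet via Homebrew, restarting the service (e.g. `sudo brew services restart socket_vmnet`) would be the usual remedy; the report does not show how this agent's daemon is managed, so that is an assumption.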

TestAddons/Setup (9.96s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-708000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-708000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (9.961300667s)

-- stdout --
	* [addons-708000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-708000" primary control-plane node in "addons-708000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-708000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:13:02.305544   11368 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:13:02.305776   11368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:13:02.305779   11368 out.go:358] Setting ErrFile to fd 2...
	I1007 05:13:02.305781   11368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:13:02.305916   11368 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:13:02.307082   11368 out.go:352] Setting JSON to false
	I1007 05:13:02.324756   11368 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6153,"bootTime":1728297029,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:13:02.324843   11368 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:13:02.329265   11368 out.go:177] * [addons-708000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:13:02.336262   11368 notify.go:220] Checking for updates...
	I1007 05:13:02.340253   11368 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:13:02.343214   11368 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:13:02.346248   11368 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:13:02.349317   11368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:13:02.350745   11368 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:13:02.354218   11368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:13:02.357429   11368 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:13:02.361102   11368 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:13:02.368266   11368 start.go:297] selected driver: qemu2
	I1007 05:13:02.368272   11368 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:13:02.368279   11368 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:13:02.370799   11368 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:13:02.374229   11368 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:13:02.377304   11368 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:13:02.377333   11368 cni.go:84] Creating CNI manager for ""
	I1007 05:13:02.377353   11368 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:13:02.377357   11368 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:13:02.377402   11368 start.go:340] cluster config:
	{Name:addons-708000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-708000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:13:02.382018   11368 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:13:02.388181   11368 out.go:177] * Starting "addons-708000" primary control-plane node in "addons-708000" cluster
	I1007 05:13:02.392251   11368 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:13:02.392273   11368 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:13:02.392280   11368 cache.go:56] Caching tarball of preloaded images
	I1007 05:13:02.392373   11368 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:13:02.392379   11368 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:13:02.392632   11368 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/addons-708000/config.json ...
	I1007 05:13:02.392647   11368 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/addons-708000/config.json: {Name:mk96d7e4b6c74c593bb16f2337cbd06deedc10a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:13:02.393013   11368 start.go:360] acquireMachinesLock for addons-708000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:13:02.393109   11368 start.go:364] duration metric: took 89.459µs to acquireMachinesLock for "addons-708000"
	I1007 05:13:02.393121   11368 start.go:93] Provisioning new machine with config: &{Name:addons-708000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-708000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:13:02.393155   11368 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:13:02.400279   11368 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1007 05:13:02.417953   11368 start.go:159] libmachine.API.Create for "addons-708000" (driver="qemu2")
	I1007 05:13:02.418006   11368 client.go:168] LocalClient.Create starting
	I1007 05:13:02.418152   11368 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:13:02.502715   11368 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:13:02.577854   11368 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:13:02.719235   11368 main.go:141] libmachine: Creating SSH key...
	I1007 05:13:02.762453   11368 main.go:141] libmachine: Creating Disk image...
	I1007 05:13:02.762458   11368 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:13:02.762659   11368 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/disk.qcow2
	I1007 05:13:02.772537   11368 main.go:141] libmachine: STDOUT: 
	I1007 05:13:02.772553   11368 main.go:141] libmachine: STDERR: 
	I1007 05:13:02.772608   11368 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/disk.qcow2 +20000M
	I1007 05:13:02.781091   11368 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:13:02.781110   11368 main.go:141] libmachine: STDERR: 
	I1007 05:13:02.781121   11368 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/disk.qcow2
	I1007 05:13:02.781127   11368 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:13:02.781168   11368 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:13:02.781199   11368 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:13:87:4f:e1:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/disk.qcow2
	I1007 05:13:02.782987   11368 main.go:141] libmachine: STDOUT: 
	I1007 05:13:02.783002   11368 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:13:02.783031   11368 client.go:171] duration metric: took 365.009833ms to LocalClient.Create
	I1007 05:13:04.785268   11368 start.go:128] duration metric: took 2.392089708s to createHost
	I1007 05:13:04.785351   11368 start.go:83] releasing machines lock for "addons-708000", held for 2.392238667s
	W1007 05:13:04.785426   11368 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:13:04.792594   11368 out.go:177] * Deleting "addons-708000" in qemu2 ...
	W1007 05:13:04.817396   11368 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:13:04.817424   11368 start.go:729] Will try again in 5 seconds ...
	I1007 05:13:09.819683   11368 start.go:360] acquireMachinesLock for addons-708000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:13:09.820384   11368 start.go:364] duration metric: took 539.625µs to acquireMachinesLock for "addons-708000"
	I1007 05:13:09.820559   11368 start.go:93] Provisioning new machine with config: &{Name:addons-708000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-708000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:13:09.820863   11368 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:13:09.834692   11368 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1007 05:13:09.885951   11368 start.go:159] libmachine.API.Create for "addons-708000" (driver="qemu2")
	I1007 05:13:09.886021   11368 client.go:168] LocalClient.Create starting
	I1007 05:13:09.886192   11368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:13:09.886275   11368 main.go:141] libmachine: Decoding PEM data...
	I1007 05:13:09.886298   11368 main.go:141] libmachine: Parsing certificate...
	I1007 05:13:09.886407   11368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:13:09.886475   11368 main.go:141] libmachine: Decoding PEM data...
	I1007 05:13:09.886493   11368 main.go:141] libmachine: Parsing certificate...
	I1007 05:13:09.887219   11368 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:13:10.042007   11368 main.go:141] libmachine: Creating SSH key...
	I1007 05:13:10.167306   11368 main.go:141] libmachine: Creating Disk image...
	I1007 05:13:10.167311   11368 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:13:10.167501   11368 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/disk.qcow2
	I1007 05:13:10.177832   11368 main.go:141] libmachine: STDOUT: 
	I1007 05:13:10.177902   11368 main.go:141] libmachine: STDERR: 
	I1007 05:13:10.177958   11368 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/disk.qcow2 +20000M
	I1007 05:13:10.186411   11368 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:13:10.186428   11368 main.go:141] libmachine: STDERR: 
	I1007 05:13:10.186446   11368 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/disk.qcow2
	I1007 05:13:10.186451   11368 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:13:10.186459   11368 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:13:10.186497   11368 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:2f:f5:ae:8d:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/addons-708000/disk.qcow2
	I1007 05:13:10.188322   11368 main.go:141] libmachine: STDOUT: 
	I1007 05:13:10.188384   11368 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:13:10.188398   11368 client.go:171] duration metric: took 302.373333ms to LocalClient.Create
	I1007 05:13:12.190570   11368 start.go:128] duration metric: took 2.369684584s to createHost
	I1007 05:13:12.190633   11368 start.go:83] releasing machines lock for "addons-708000", held for 2.370227958s
	W1007 05:13:12.190942   11368 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-708000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-708000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:13:12.202533   11368 out.go:201] 
	W1007 05:13:12.206696   11368 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:13:12.206721   11368 out.go:270] * 
	* 
	W1007 05:13:12.209431   11368 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:13:12.219534   11368 out.go:201] 

** /stderr **
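The stderr dump above shows minikube's recovery path: the first createHost fails, the half-created profile is deleted, and after "Will try again in 5 seconds" exactly one more attempt is made before exiting with GUEST_PROVISION. A hedged Go sketch of that single-retry flow; createHost here is a stand-in that fails the way the log does:

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost stands in for the real provisioning step; it fails the
// same way the log does while the socket_vmnet daemon is unreachable.
func createHost(name string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const retryDelay = 5 * time.Second
	name := "addons-708000" // profile name from the log
	if err := createHost(name); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(retryDelay)
		if err := createHost(name); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
}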
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-708000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (9.96s)
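Both creation attempts die at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU never receives the network descriptor it expects as "-netdev socket,id=net0,fd=3". A hedged Go sketch of that descriptor-passing mechanism (abbreviated QEMU arguments; this is an illustration, not minikube's actual code):

package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	// This dial is the exact operation that fails in the log with
	// "Connection refused" whenever the socket_vmnet daemon is down.
	conn, err := net.DialUnix("unix", nil,
		&net.UnixAddr{Name: "/var/run/socket_vmnet", Net: "unix"})
	if err != nil {
		log.Fatalf("Failed to connect to %q: %v", "/var/run/socket_vmnet", err)
	}
	f, err := conn.File() // dup the connected descriptor as an *os.File
	if err != nil {
		log.Fatal(err)
	}
	// ExtraFiles[0] becomes fd 3 in the child (after stdin/out/err),
	// which is why the QEMU command line can say -netdev socket,fd=3.
	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
	cmd.ExtraFiles = []*os.File{f}
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}

The likely fix is outside minikube entirely: start the socket_vmnet daemon (for example via its launchd service) before rerunning the qemu2-driver tests.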

TestCertOptions (10.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-516000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-516000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.883402541s)

-- stdout --
	* [cert-options-516000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-516000" primary control-plane node in "cert-options-516000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-516000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-516000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-516000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-516000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-516000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (85.734834ms)

-- stdout --
	* The control-plane node cert-options-516000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-516000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-516000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-516000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-516000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-516000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (46.220625ms)

-- stdout --
	* The control-plane node cert-options-516000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-516000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-516000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-516000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-516000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-10-07 05:24:20.775632 -0700 PDT m=+738.062304710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-516000 -n cert-options-516000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-516000 -n cert-options-516000: exit status 7 (35.415958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-516000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-516000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-516000
--- FAIL: TestCertOptions (10.16s)
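Because the ssh step exited with status 83 (host stopped), the SAN assertions at cert_options_test.go:69 had no certificate to inspect, so every expected entry was reported missing. For reference, a hedged Go sketch of that kind of SAN check using crypto/x509; the certificate path argument and the sanIncludes helper are illustrative, not the test's actual code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

// sanIncludes reports whether the certificate's SAN extension covers
// every IP and DNS name the test passed on the command line.
func sanIncludes(certPEM []byte, wantIPs, wantNames []string) error {
	block, _ := pem.Decode(certPEM)
	if block == nil {
		return fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	for _, w := range wantIPs {
		found := false
		for _, ip := range cert.IPAddresses {
			if ip.Equal(net.ParseIP(w)) {
				found = true
				break
			}
		}
		if !found {
			return fmt.Errorf("apiserver cert does not include %s in SAN", w)
		}
	}
	for _, w := range wantNames {
		if err := cert.VerifyHostname(w); err != nil {
			return fmt.Errorf("apiserver cert does not include %s in SAN", w)
		}
	}
	return nil
}

func main() {
	certPEM, err := os.ReadFile(os.Args[1]) // e.g. apiserver.crt copied off the node
	if err != nil {
		panic(err)
	}
	wantIPs := []string{"127.0.0.1", "192.168.15.15"}
	wantNames := []string{"localhost", "www.google.com"}
	if err := sanIncludes(certPEM, wantIPs, wantNames); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("all SAN entries present")
}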

TestCertExpiration (195.19s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-719000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-719000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.801589583s)

-- stdout --
	* [cert-expiration-719000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-719000" primary control-plane node in "cert-expiration-719000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-719000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-719000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-719000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-719000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-719000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.233133083s)

-- stdout --
	* [cert-expiration-719000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-719000" primary control-plane node in "cert-expiration-719000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-719000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-719000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-719000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-719000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-719000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-719000" primary control-plane node in "cert-expiration-719000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-719000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-719000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-719000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-10-07 05:27:20.716639 -0700 PDT m=+918.006641876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-719000 -n cert-expiration-719000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-719000 -n cert-expiration-719000: exit status 7 (66.200833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-719000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-719000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-719000
--- FAIL: TestCertExpiration (195.19s)
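TestCertExpiration is meant to start a cluster whose certs live for 3 minutes, wait out that lifetime, and confirm that the 8760h restart warns about expired certs; here both starts failed at the vmnet socket before any certificate was issued. For reference, a hedged Go sketch of the underlying condition, comparing a certificate's NotAfter to the clock (the file path argument is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	certPEM, err := os.ReadFile(os.Args[1]) // e.g. a cert minted with --cert-expiration=3m
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(certPEM)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The "expired certs" warning keys on exactly this comparison.
	if remaining := time.Until(cert.NotAfter); remaining <= 0 {
		fmt.Printf("certificate expired %s ago\n", -remaining)
	} else {
		fmt.Printf("certificate valid for another %s\n", remaining)
	}
}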

TestDockerFlags (10.13s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-871000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-871000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.882799417s)

-- stdout --
	* [docker-flags-871000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-871000" primary control-plane node in "docker-flags-871000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-871000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:24:00.632958   12956 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:24:00.633121   12956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:24:00.633125   12956 out.go:358] Setting ErrFile to fd 2...
	I1007 05:24:00.633127   12956 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:24:00.633252   12956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:24:00.634481   12956 out.go:352] Setting JSON to false
	I1007 05:24:00.651968   12956 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6811,"bootTime":1728297029,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:24:00.652041   12956 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:24:00.658056   12956 out.go:177] * [docker-flags-871000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:24:00.664970   12956 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:24:00.665028   12956 notify.go:220] Checking for updates...
	I1007 05:24:00.671903   12956 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:24:00.674960   12956 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:24:00.678010   12956 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:24:00.680928   12956 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:24:00.683973   12956 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:24:00.687387   12956 config.go:182] Loaded profile config "force-systemd-flag-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:24:00.687463   12956 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:24:00.687513   12956 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:24:00.690894   12956 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:24:00.698034   12956 start.go:297] selected driver: qemu2
	I1007 05:24:00.698039   12956 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:24:00.698045   12956 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:24:00.700519   12956 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:24:00.702104   12956 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:24:00.705046   12956 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1007 05:24:00.705063   12956 cni.go:84] Creating CNI manager for ""
	I1007 05:24:00.705085   12956 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:24:00.705090   12956 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:24:00.705131   12956 start.go:340] cluster config:
	{Name:docker-flags-871000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:24:00.709705   12956 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:24:00.717867   12956 out.go:177] * Starting "docker-flags-871000" primary control-plane node in "docker-flags-871000" cluster
	I1007 05:24:00.721997   12956 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:24:00.722014   12956 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:24:00.722024   12956 cache.go:56] Caching tarball of preloaded images
	I1007 05:24:00.722126   12956 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:24:00.722132   12956 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:24:00.722217   12956 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/docker-flags-871000/config.json ...
	I1007 05:24:00.722228   12956 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/docker-flags-871000/config.json: {Name:mk06109741c41dfca33acc46763808ae939be150 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:24:00.722596   12956 start.go:360] acquireMachinesLock for docker-flags-871000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:24:00.722648   12956 start.go:364] duration metric: took 45.75µs to acquireMachinesLock for "docker-flags-871000"
	I1007 05:24:00.722661   12956 start.go:93] Provisioning new machine with config: &{Name:docker-flags-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:24:00.722691   12956 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:24:00.726939   12956 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1007 05:24:00.744394   12956 start.go:159] libmachine.API.Create for "docker-flags-871000" (driver="qemu2")
	I1007 05:24:00.744425   12956 client.go:168] LocalClient.Create starting
	I1007 05:24:00.744489   12956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:24:00.744525   12956 main.go:141] libmachine: Decoding PEM data...
	I1007 05:24:00.744537   12956 main.go:141] libmachine: Parsing certificate...
	I1007 05:24:00.744585   12956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:24:00.744616   12956 main.go:141] libmachine: Decoding PEM data...
	I1007 05:24:00.744623   12956 main.go:141] libmachine: Parsing certificate...
	I1007 05:24:00.744999   12956 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:24:00.887789   12956 main.go:141] libmachine: Creating SSH key...
	I1007 05:24:01.034656   12956 main.go:141] libmachine: Creating Disk image...
	I1007 05:24:01.034668   12956 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:24:01.034863   12956 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/disk.qcow2
	I1007 05:24:01.045008   12956 main.go:141] libmachine: STDOUT: 
	I1007 05:24:01.045078   12956 main.go:141] libmachine: STDERR: 
	I1007 05:24:01.045133   12956 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/disk.qcow2 +20000M
	I1007 05:24:01.053848   12956 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:24:01.053874   12956 main.go:141] libmachine: STDERR: 
	I1007 05:24:01.053889   12956 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/disk.qcow2
	I1007 05:24:01.053895   12956 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:24:01.053906   12956 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:24:01.053941   12956 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:10:e3:11:5f:f6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/disk.qcow2
	I1007 05:24:01.055840   12956 main.go:141] libmachine: STDOUT: 
	I1007 05:24:01.055855   12956 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:24:01.055877   12956 client.go:171] duration metric: took 311.449666ms to LocalClient.Create
	I1007 05:24:03.058085   12956 start.go:128] duration metric: took 2.335403625s to createHost
	I1007 05:24:03.058147   12956 start.go:83] releasing machines lock for "docker-flags-871000", held for 2.335531458s
	W1007 05:24:03.058189   12956 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:24:03.071303   12956 out.go:177] * Deleting "docker-flags-871000" in qemu2 ...
	W1007 05:24:03.091868   12956 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:24:03.091904   12956 start.go:729] Will try again in 5 seconds ...
	I1007 05:24:08.093967   12956 start.go:360] acquireMachinesLock for docker-flags-871000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:24:08.094384   12956 start.go:364] duration metric: took 350.583µs to acquireMachinesLock for "docker-flags-871000"
	I1007 05:24:08.094480   12956 start.go:93] Provisioning new machine with config: &{Name:docker-flags-871000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-871000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:24:08.094784   12956 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:24:08.102637   12956 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1007 05:24:08.152712   12956 start.go:159] libmachine.API.Create for "docker-flags-871000" (driver="qemu2")
	I1007 05:24:08.152774   12956 client.go:168] LocalClient.Create starting
	I1007 05:24:08.152911   12956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:24:08.152992   12956 main.go:141] libmachine: Decoding PEM data...
	I1007 05:24:08.153008   12956 main.go:141] libmachine: Parsing certificate...
	I1007 05:24:08.153074   12956 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:24:08.153131   12956 main.go:141] libmachine: Decoding PEM data...
	I1007 05:24:08.153152   12956 main.go:141] libmachine: Parsing certificate...
	I1007 05:24:08.153950   12956 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:24:08.311143   12956 main.go:141] libmachine: Creating SSH key...
	I1007 05:24:08.416889   12956 main.go:141] libmachine: Creating Disk image...
	I1007 05:24:08.416895   12956 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:24:08.417070   12956 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/disk.qcow2
	I1007 05:24:08.426845   12956 main.go:141] libmachine: STDOUT: 
	I1007 05:24:08.426867   12956 main.go:141] libmachine: STDERR: 
	I1007 05:24:08.426924   12956 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/disk.qcow2 +20000M
	I1007 05:24:08.435237   12956 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:24:08.435252   12956 main.go:141] libmachine: STDERR: 
	I1007 05:24:08.435266   12956 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/disk.qcow2
	I1007 05:24:08.435271   12956 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:24:08.435280   12956 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:24:08.435323   12956 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:34:1b:88:de:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/docker-flags-871000/disk.qcow2
	I1007 05:24:08.437159   12956 main.go:141] libmachine: STDOUT: 
	I1007 05:24:08.437174   12956 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:24:08.437189   12956 client.go:171] duration metric: took 284.41625ms to LocalClient.Create
	I1007 05:24:10.439338   12956 start.go:128] duration metric: took 2.344566417s to createHost
	I1007 05:24:10.439397   12956 start.go:83] releasing machines lock for "docker-flags-871000", held for 2.345032083s
	W1007 05:24:10.439888   12956 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-871000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:24:10.452553   12956 out.go:201] 
	W1007 05:24:10.456642   12956 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:24:10.456669   12956 out.go:270] * 
	* 
	W1007 05:24:10.459553   12956 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:24:10.468511   12956 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-871000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-871000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-871000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.797125ms)

-- stdout --
	* The control-plane node docker-flags-871000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-871000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-871000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-871000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-871000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-871000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-871000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-871000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-871000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.566167ms)

-- stdout --
	* The control-plane node docker-flags-871000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-871000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-871000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-871000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-871000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-871000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-10-07 05:24:10.61309 -0700 PDT m=+727.899574376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-871000 -n docker-flags-871000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-871000 -n docker-flags-871000: exit status 7 (33.51ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-871000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-871000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-871000
--- FAIL: TestDockerFlags (10.13s)
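
Every VM create attempt in this failure (and the two that follow) dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives a network file descriptor. A minimal Go sketch of that reachability check follows; the socket path is taken from the failing command line in the log, while the file name and the 2-second timeout are assumptions, not part of the harness:

// probe_socket.go - a sketch of the connectivity check that
// socket_vmnet_client performs before handing QEMU a file descriptor.
// On this CI host the dial fails with "connection refused", matching
// the STDERR lines in the log above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const socketPath = "/var/run/socket_vmnet" // path from the failing command line
	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second) // assumed timeout
	if err != nil {
		fmt.Fprintf(os.Stderr, "cannot reach %s: %v\n", socketPath, err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Printf("%s is accepting connections\n", socketPath)
}

If this dial fails, the socket_vmnet daemon behind the socket is down or unreachable on the host; checking and restarting that daemon would be the first thing to try for this group of failures.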

TestForceSystemdFlag (10.15s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag


=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-772000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-772000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.951844625s)

-- stdout --
	* [force-systemd-flag-772000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-772000" primary control-plane node in "force-systemd-flag-772000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-772000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:23:55.560551   12934 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:23:55.560708   12934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:23:55.560718   12934 out.go:358] Setting ErrFile to fd 2...
	I1007 05:23:55.560722   12934 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:23:55.560882   12934 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:23:55.562066   12934 out.go:352] Setting JSON to false
	I1007 05:23:55.579948   12934 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6806,"bootTime":1728297029,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:23:55.580013   12934 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:23:55.586074   12934 out.go:177] * [force-systemd-flag-772000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:23:55.597977   12934 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:23:55.597995   12934 notify.go:220] Checking for updates...
	I1007 05:23:55.604966   12934 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:23:55.609014   12934 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:23:55.611977   12934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:23:55.615009   12934 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:23:55.617985   12934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:23:55.621354   12934 config.go:182] Loaded profile config "force-systemd-env-838000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:23:55.621433   12934 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:23:55.621492   12934 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:23:55.625893   12934 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:23:55.632997   12934 start.go:297] selected driver: qemu2
	I1007 05:23:55.633003   12934 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:23:55.633012   12934 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:23:55.635447   12934 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:23:55.639112   12934 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:23:55.642007   12934 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 05:23:55.642021   12934 cni.go:84] Creating CNI manager for ""
	I1007 05:23:55.642046   12934 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:23:55.642053   12934 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:23:55.642081   12934 start.go:340] cluster config:
	{Name:force-systemd-flag-772000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:23:55.647161   12934 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:23:55.654028   12934 out.go:177] * Starting "force-systemd-flag-772000" primary control-plane node in "force-systemd-flag-772000" cluster
	I1007 05:23:55.657954   12934 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:23:55.657972   12934 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:23:55.657983   12934 cache.go:56] Caching tarball of preloaded images
	I1007 05:23:55.658086   12934 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:23:55.658093   12934 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:23:55.658169   12934 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/force-systemd-flag-772000/config.json ...
	I1007 05:23:55.658181   12934 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/force-systemd-flag-772000/config.json: {Name:mk6ec946b5440cd86706f1bb1e4ae59b2e98795f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:23:55.658583   12934 start.go:360] acquireMachinesLock for force-systemd-flag-772000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:23:55.658641   12934 start.go:364] duration metric: took 49.333µs to acquireMachinesLock for "force-systemd-flag-772000"
	I1007 05:23:55.658656   12934 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:23:55.658687   12934 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:23:55.661978   12934 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1007 05:23:55.680294   12934 start.go:159] libmachine.API.Create for "force-systemd-flag-772000" (driver="qemu2")
	I1007 05:23:55.680323   12934 client.go:168] LocalClient.Create starting
	I1007 05:23:55.680388   12934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:23:55.680433   12934 main.go:141] libmachine: Decoding PEM data...
	I1007 05:23:55.680447   12934 main.go:141] libmachine: Parsing certificate...
	I1007 05:23:55.680491   12934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:23:55.680534   12934 main.go:141] libmachine: Decoding PEM data...
	I1007 05:23:55.680546   12934 main.go:141] libmachine: Parsing certificate...
	I1007 05:23:55.680988   12934 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:23:55.822190   12934 main.go:141] libmachine: Creating SSH key...
	I1007 05:23:56.032072   12934 main.go:141] libmachine: Creating Disk image...
	I1007 05:23:56.032086   12934 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:23:56.032304   12934 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/disk.qcow2
	I1007 05:23:56.042832   12934 main.go:141] libmachine: STDOUT: 
	I1007 05:23:56.042854   12934 main.go:141] libmachine: STDERR: 
	I1007 05:23:56.042915   12934 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/disk.qcow2 +20000M
	I1007 05:23:56.051403   12934 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:23:56.051424   12934 main.go:141] libmachine: STDERR: 
	I1007 05:23:56.051438   12934 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/disk.qcow2
	I1007 05:23:56.051445   12934 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:23:56.051457   12934 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:23:56.051486   12934 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:f0:b3:cb:4a:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/disk.qcow2
	I1007 05:23:56.053275   12934 main.go:141] libmachine: STDOUT: 
	I1007 05:23:56.053289   12934 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:23:56.053309   12934 client.go:171] duration metric: took 372.98625ms to LocalClient.Create
	I1007 05:23:58.055533   12934 start.go:128] duration metric: took 2.396857875s to createHost
	I1007 05:23:58.055606   12934 start.go:83] releasing machines lock for "force-systemd-flag-772000", held for 2.396988709s
	W1007 05:23:58.055689   12934 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:23:58.078796   12934 out.go:177] * Deleting "force-systemd-flag-772000" in qemu2 ...
	W1007 05:23:58.098688   12934 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:23:58.098711   12934 start.go:729] Will try again in 5 seconds ...
	I1007 05:24:03.100739   12934 start.go:360] acquireMachinesLock for force-systemd-flag-772000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:24:03.101224   12934 start.go:364] duration metric: took 421.667µs to acquireMachinesLock for "force-systemd-flag-772000"
	I1007 05:24:03.101315   12934 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:24:03.101480   12934 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:24:03.108973   12934 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1007 05:24:03.150578   12934 start.go:159] libmachine.API.Create for "force-systemd-flag-772000" (driver="qemu2")
	I1007 05:24:03.150630   12934 client.go:168] LocalClient.Create starting
	I1007 05:24:03.150748   12934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:24:03.150835   12934 main.go:141] libmachine: Decoding PEM data...
	I1007 05:24:03.150854   12934 main.go:141] libmachine: Parsing certificate...
	I1007 05:24:03.150908   12934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:24:03.150965   12934 main.go:141] libmachine: Decoding PEM data...
	I1007 05:24:03.150982   12934 main.go:141] libmachine: Parsing certificate...
	I1007 05:24:03.151682   12934 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:24:03.304936   12934 main.go:141] libmachine: Creating SSH key...
	I1007 05:24:03.413559   12934 main.go:141] libmachine: Creating Disk image...
	I1007 05:24:03.413565   12934 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:24:03.413756   12934 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/disk.qcow2
	I1007 05:24:03.423666   12934 main.go:141] libmachine: STDOUT: 
	I1007 05:24:03.423685   12934 main.go:141] libmachine: STDERR: 
	I1007 05:24:03.423753   12934 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/disk.qcow2 +20000M
	I1007 05:24:03.432328   12934 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:24:03.432351   12934 main.go:141] libmachine: STDERR: 
	I1007 05:24:03.432364   12934 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/disk.qcow2
	I1007 05:24:03.432381   12934 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:24:03.432388   12934 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:24:03.432418   12934 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:b9:8e:b1:12:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-flag-772000/disk.qcow2
	I1007 05:24:03.434262   12934 main.go:141] libmachine: STDOUT: 
	I1007 05:24:03.434278   12934 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:24:03.434291   12934 client.go:171] duration metric: took 283.65975ms to LocalClient.Create
	I1007 05:24:05.436572   12934 start.go:128] duration metric: took 2.335098584s to createHost
	I1007 05:24:05.436653   12934 start.go:83] releasing machines lock for "force-systemd-flag-772000", held for 2.335454125s
	W1007 05:24:05.437055   12934 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:24:05.448807   12934 out.go:201] 
	W1007 05:24:05.452764   12934 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:24:05.452797   12934 out.go:270] * 
	* 
	W1007 05:24:05.455524   12934 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:24:05.465795   12934 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-772000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-772000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-772000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (84.159584ms)

-- stdout --
	* The control-plane node force-systemd-flag-772000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-772000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-772000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-10-07 05:24:05.5678 -0700 PDT m=+722.854191460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-772000 -n force-systemd-flag-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-772000 -n force-systemd-flag-772000: exit status 7 (36.411958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-772000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-772000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-772000
--- FAIL: TestForceSystemdFlag (10.15s)
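
The log above shows minikube's two-attempt provisioning flow: createHost fails, the half-built profile is deleted, and a second attempt starts after "Will try again in 5 seconds" before the run exits with GUEST_PROVISION. A sketch of that retry shape follows; createHost and deleteHost are hypothetical stand-ins for the libmachine calls in the log, not minikube's actual API:

// retry_sketch.go - an illustrative reconstruction of the two-attempt
// flow in start.go ("StartHost failed, but will try again" / "Will try
// again in 5 seconds"). The error string mirrors the one in the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

func createHost(name string) error {
	// In the real run this is where socket_vmnet_client fails.
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func deleteHost(name string) {
	fmt.Printf("* Deleting %q in qemu2 ...\n", name)
}

// startWithRetry creates the host, cleans up the half-built machine on
// failure, waits, then tries once more before giving up.
func startWithRetry(name string, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = createHost(name); err == nil {
			return nil
		}
		deleteHost(name)
		if i < attempts-1 {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(delay)
		}
	}
	return err
}

func main() {
	if err := startWithRetry("force-systemd-flag-772000", 2, 5*time.Second); err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}
}

Because the failure is environmental (the host-side socket_vmnet daemon), the retry cannot succeed, which is why both attempts in each test fail identically.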

TestForceSystemdEnv (10.47s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv


=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-838000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1007 05:23:50.270595   11284 install.go:79] stdout: 
W1007 05:23:50.270726   11284 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/001/docker-machine-driver-hyperkit 


I1007 05:23:50.270747   11284 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/001/docker-machine-driver-hyperkit]
I1007 05:23:50.283581   11284 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/001/docker-machine-driver-hyperkit]
I1007 05:23:50.295089   11284 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/001/docker-machine-driver-hyperkit]
I1007 05:23:50.305823   11284 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/001/docker-machine-driver-hyperkit]
I1007 05:23:50.330799   11284 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1007 05:23:50.330964   11284 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1007 05:23:52.172709   11284 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1007 05:23:52.172729   11284 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1007 05:23:52.172783   11284 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1007 05:23:52.172827   11284 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/002/docker-machine-driver-hyperkit
I1007 05:23:52.576050   11284 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1071fe380 0x1071fe380 0x1071fe380 0x1071fe380 0x1071fe380 0x1071fe380 0x1071fe380] Decompressors:map[bz2:0x1400051b620 gz:0x1400051b628 tar:0x1400051b5b0 tar.bz2:0x1400051b5c0 tar.gz:0x1400051b5e0 tar.xz:0x1400051b5f0 tar.zst:0x1400051b610 tbz2:0x1400051b5c0 tgz:0x1400051b5e0 txz:0x1400051b5f0 tzst:0x1400051b610 xz:0x1400051b630 zip:0x1400051b670 zst:0x1400051b638] Getters:map[file:0x1400152c110 http:0x1400059f540 https:0x1400059f590] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1007 05:23:52.576200   11284 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/002/docker-machine-driver-hyperkit
I1007 05:23:55.474267   11284 install.go:79] stdout: 
W1007 05:23:55.474468   11284 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/002/docker-machine-driver-hyperkit 


I1007 05:23:55.474496   11284 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/002/docker-machine-driver-hyperkit]
I1007 05:23:55.492271   11284 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/002/docker-machine-driver-hyperkit]
I1007 05:23:55.506682   11284 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/002/docker-machine-driver-hyperkit]
I1007 05:23:55.517505   11284 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/002/docker-machine-driver-hyperkit]
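
The interleaved TestHyperKitDriverInstallOrUpdate lines above show the driver download falling back from the arch-specific artifact (whose .sha256 checksum URL returns 404) to the common, unsuffixed one. A sketch of that fallback shape follows, under assumed names: downloadWithChecksum is a hypothetical stand-in for the go-getter call the real download.go uses.

// fallback_sketch.go - the try-arch-specific-then-common download shape
// seen at driver.go:46 in the log.
package main

import (
	"fmt"
	"strings"
)

const base = "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"

// downloadWithChecksum is a hypothetical stand-in: assume it fetches url,
// verifies it against url+".sha256", and returns an error on failure.
// Here it simulates the 404 that the arm64-specific checksum file returns.
func downloadWithChecksum(url, dst string) error {
	if strings.HasSuffix(url, "-arm64") {
		return fmt.Errorf("invalid checksum: Error downloading checksum file: bad response code: 404")
	}
	return nil // pretend the common artifact verified cleanly
}

func downloadDriver(arch, dst string) error {
	// Try the arch-specific artifact first...
	if err := downloadWithChecksum(base+"-"+arch, dst); err != nil {
		fmt.Printf("failed to download arch specific driver: %v. trying to get the common version\n", err)
		// ...then fall back to the unsuffixed "common" artifact.
		return downloadWithChecksum(base, dst)
	}
	return nil
}

func main() {
	if err := downloadDriver("arm64", "/tmp/docker-machine-driver-hyperkit"); err != nil {
		fmt.Println("download failed:", err)
	}
}

In the actual run the fallback succeeds, which is why the install continues with the chown/chmod steps logged above.
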
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-838000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.260065541s)

-- stdout --
	* [force-systemd-env-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-838000" primary control-plane node in "force-systemd-env-838000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-838000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:23:50.170037   12899 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:23:50.170216   12899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:23:50.170219   12899 out.go:358] Setting ErrFile to fd 2...
	I1007 05:23:50.170221   12899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:23:50.170364   12899 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:23:50.171563   12899 out.go:352] Setting JSON to false
	I1007 05:23:50.190223   12899 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6801,"bootTime":1728297029,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:23:50.190292   12899 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:23:50.196112   12899 out.go:177] * [force-systemd-env-838000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:23:50.202108   12899 notify.go:220] Checking for updates...
	I1007 05:23:50.205971   12899 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:23:50.213988   12899 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:23:50.221962   12899 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:23:50.230944   12899 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:23:50.238801   12899 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:23:50.246977   12899 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1007 05:23:50.248713   12899 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:23:50.248767   12899 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:23:50.253019   12899 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:23:50.259845   12899 start.go:297] selected driver: qemu2
	I1007 05:23:50.259855   12899 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:23:50.259862   12899 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:23:50.262744   12899 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:23:50.265997   12899 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:23:50.270108   12899 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 05:23:50.270124   12899 cni.go:84] Creating CNI manager for ""
	I1007 05:23:50.270144   12899 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:23:50.270149   12899 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:23:50.270187   12899 start.go:340] cluster config:
	{Name:force-systemd-env-838000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:23:50.274472   12899 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:23:50.283014   12899 out.go:177] * Starting "force-systemd-env-838000" primary control-plane node in "force-systemd-env-838000" cluster
	I1007 05:23:50.286019   12899 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:23:50.286044   12899 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:23:50.286057   12899 cache.go:56] Caching tarball of preloaded images
	I1007 05:23:50.286164   12899 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:23:50.286169   12899 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:23:50.286239   12899 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/force-systemd-env-838000/config.json ...
	I1007 05:23:50.286251   12899 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/force-systemd-env-838000/config.json: {Name:mkba241b9fad292a545359582e371366896985e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:23:50.286484   12899 start.go:360] acquireMachinesLock for force-systemd-env-838000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:23:50.286532   12899 start.go:364] duration metric: took 40.083µs to acquireMachinesLock for "force-systemd-env-838000"
	I1007 05:23:50.286544   12899 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:23:50.286572   12899 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:23:50.293833   12899 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1007 05:23:50.308784   12899 start.go:159] libmachine.API.Create for "force-systemd-env-838000" (driver="qemu2")
	I1007 05:23:50.308816   12899 client.go:168] LocalClient.Create starting
	I1007 05:23:50.308895   12899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:23:50.308932   12899 main.go:141] libmachine: Decoding PEM data...
	I1007 05:23:50.308944   12899 main.go:141] libmachine: Parsing certificate...
	I1007 05:23:50.308991   12899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:23:50.309021   12899 main.go:141] libmachine: Decoding PEM data...
	I1007 05:23:50.309031   12899 main.go:141] libmachine: Parsing certificate...
	I1007 05:23:50.309419   12899 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:23:50.454634   12899 main.go:141] libmachine: Creating SSH key...
	I1007 05:23:50.588821   12899 main.go:141] libmachine: Creating Disk image...
	I1007 05:23:50.588830   12899 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:23:50.589026   12899 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/disk.qcow2
	I1007 05:23:50.599420   12899 main.go:141] libmachine: STDOUT: 
	I1007 05:23:50.599448   12899 main.go:141] libmachine: STDERR: 
	I1007 05:23:50.599511   12899 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/disk.qcow2 +20000M
	I1007 05:23:50.608212   12899 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:23:50.608307   12899 main.go:141] libmachine: STDERR: 
	I1007 05:23:50.608327   12899 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/disk.qcow2
	I1007 05:23:50.608332   12899 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:23:50.608347   12899 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:23:50.608370   12899 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:63:5a:44:32:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/disk.qcow2
	I1007 05:23:50.610271   12899 main.go:141] libmachine: STDOUT: 
	I1007 05:23:50.610306   12899 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:23:50.610333   12899 client.go:171] duration metric: took 301.516459ms to LocalClient.Create
	I1007 05:23:52.612581   12899 start.go:128] duration metric: took 2.326008542s to createHost
	I1007 05:23:52.612721   12899 start.go:83] releasing machines lock for "force-systemd-env-838000", held for 2.326221041s
	W1007 05:23:52.612775   12899 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:23:52.626815   12899 out.go:177] * Deleting "force-systemd-env-838000" in qemu2 ...
	W1007 05:23:52.649719   12899 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:23:52.649745   12899 start.go:729] Will try again in 5 seconds ...
	I1007 05:23:57.650372   12899 start.go:360] acquireMachinesLock for force-systemd-env-838000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:23:58.055758   12899 start.go:364] duration metric: took 405.30975ms to acquireMachinesLock for "force-systemd-env-838000"
	I1007 05:23:58.055896   12899 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-838000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-838000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:23:58.056142   12899 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:23:58.070740   12899 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1007 05:23:58.117561   12899 start.go:159] libmachine.API.Create for "force-systemd-env-838000" (driver="qemu2")
	I1007 05:23:58.117614   12899 client.go:168] LocalClient.Create starting
	I1007 05:23:58.117774   12899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:23:58.117871   12899 main.go:141] libmachine: Decoding PEM data...
	I1007 05:23:58.117891   12899 main.go:141] libmachine: Parsing certificate...
	I1007 05:23:58.117960   12899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:23:58.118019   12899 main.go:141] libmachine: Decoding PEM data...
	I1007 05:23:58.118029   12899 main.go:141] libmachine: Parsing certificate...
	I1007 05:23:58.118676   12899 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:23:58.272439   12899 main.go:141] libmachine: Creating SSH key...
	I1007 05:23:58.330934   12899 main.go:141] libmachine: Creating Disk image...
	I1007 05:23:58.330939   12899 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:23:58.331131   12899 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/disk.qcow2
	I1007 05:23:58.341296   12899 main.go:141] libmachine: STDOUT: 
	I1007 05:23:58.341317   12899 main.go:141] libmachine: STDERR: 
	I1007 05:23:58.341405   12899 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/disk.qcow2 +20000M
	I1007 05:23:58.349950   12899 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:23:58.349966   12899 main.go:141] libmachine: STDERR: 
	I1007 05:23:58.349979   12899 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/disk.qcow2
	I1007 05:23:58.349984   12899 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:23:58.349995   12899 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:23:58.350027   12899 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:90:72:89:f5:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/force-systemd-env-838000/disk.qcow2
	I1007 05:23:58.351788   12899 main.go:141] libmachine: STDOUT: 
	I1007 05:23:58.351812   12899 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:23:58.351836   12899 client.go:171] duration metric: took 234.211458ms to LocalClient.Create
	I1007 05:24:00.352095   12899 start.go:128] duration metric: took 2.295948833s to createHost
	I1007 05:24:00.352201   12899 start.go:83] releasing machines lock for "force-systemd-env-838000", held for 2.296455917s
	W1007 05:24:00.352510   12899 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-838000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:24:00.365155   12899 out.go:201] 
	W1007 05:24:00.371092   12899 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:24:00.371132   12899 out.go:270] * 
	* 
	W1007 05:24:00.374039   12899 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:24:00.380866   12899 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-838000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-838000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-838000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (91.931208ms)

-- stdout --
	* The control-plane node force-systemd-env-838000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-838000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-838000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-10-07 05:24:00.49102 -0700 PDT m=+717.777317168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-838000 -n force-systemd-env-838000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-838000 -n force-systemd-env-838000: exit status 7 (36.213958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-838000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-838000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-838000
--- FAIL: TestForceSystemdEnv (10.47s)

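All of the qemu2 start failures in this report reduce to the error shown above: nothing is accepting connections on /var/run/socket_vmnet, so socket_vmnet_client (and therefore the QEMU VM behind it) can never start. A minimal first check on the affected host, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe (paths may differ for a from-source install):

	# Is the daemon socket present at the path minikube is configured to use?
	ls -l /var/run/socket_vmnet

	# Homebrew runs socket_vmnet as a service; it must be started as root
	# so the daemon can create the underlying vmnet interface.
	HOMEBREW=$(which brew)
	sudo "${HOMEBREW}" services start socket_vmnet
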
TestErrorSpam/setup (9.78s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-561000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-561000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 --driver=qemu2 : exit status 80 (9.776247292s)

-- stdout --
	* [nospam-561000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-561000" primary control-plane node in "nospam-561000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-561000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-561000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-561000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-561000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=18424
- KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-561000" primary control-plane node in "nospam-561000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-561000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused



error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-561000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.78s)

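The repeated Failed to connect to "/var/run/socket_vmnet": Connection refused line is emitted by socket_vmnet_client, which minikube uses to wrap every qemu-system-aarch64 invocation (see the "executing:" lines in the traces above). The client can be probed on its own to separate a daemon-side failure from a QEMU-side one; a sketch, assuming the install paths recorded in these logs:

	# socket_vmnet_client connects to the daemon socket and then execs the
	# given command with the connection as an inherited fd, so any trivial
	# command is enough to test whether the daemon answers at all.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	echo "socket_vmnet_client exit status: $?"

The same "Connection refused" message here would confirm the daemon is down independently of minikube and QEMU.
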
TestFunctional/serial/StartWithProxy (9.87s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-359000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-359000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.790736959s)

-- stdout --
	* [functional-359000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-359000" primary control-plane node in "functional-359000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-359000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52058 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52058 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52058 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-359000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-359000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=18424
- KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-359000" primary control-plane node in "functional-359000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-359000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused



, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:52058 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:52058 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:52058 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000: exit status 7 (76.448792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-359000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.87s)

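Note that StartWithProxy fails before it can print either of the strings the test greps for ("Found network options:" in stdout, "You appear to be using a proxy" in stderr, per the want: lines above), so the proxy handling itself is never exercised. Once socket_vmnet is healthy, the harness invocation can be replayed by hand; a sketch reusing the arguments recorded in this run (the proxy port is whatever local listener the harness happened to start):

	# Replay the start from functional_test.go:2234 with a local proxy in
	# the environment, as the test harness does.
	HTTP_PROXY=localhost:52058 out/minikube-darwin-arm64 start \
	  -p functional-359000 --memory=4000 --apiserver-port=8441 \
	  --wait=all --driver=qemu2
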
TestFunctional/serial/SoftStart (5.27s)

=== RUN   TestFunctional/serial/SoftStart
I1007 05:13:40.205137   11284 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-359000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-359000 --alsologtostderr -v=8: exit status 80 (5.187927458s)

-- stdout --
	* [functional-359000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-359000" primary control-plane node in "functional-359000" cluster
	* Restarting existing qemu2 VM for "functional-359000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-359000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:13:40.239053   11498 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:13:40.239201   11498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:13:40.239204   11498 out.go:358] Setting ErrFile to fd 2...
	I1007 05:13:40.239207   11498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:13:40.239333   11498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:13:40.240446   11498 out.go:352] Setting JSON to false
	I1007 05:13:40.258388   11498 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6191,"bootTime":1728297029,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:13:40.258458   11498 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:13:40.263581   11498 out.go:177] * [functional-359000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:13:40.270462   11498 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:13:40.270508   11498 notify.go:220] Checking for updates...
	I1007 05:13:40.277454   11498 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:13:40.280485   11498 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:13:40.283499   11498 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:13:40.285002   11498 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:13:40.288517   11498 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:13:40.291744   11498 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:13:40.291794   11498 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:13:40.296345   11498 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:13:40.303494   11498 start.go:297] selected driver: qemu2
	I1007 05:13:40.303500   11498 start.go:901] validating driver "qemu2" against &{Name:functional-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:13:40.303545   11498 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:13:40.305985   11498 cni.go:84] Creating CNI manager for ""
	I1007 05:13:40.306029   11498 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:13:40.306079   11498 start.go:340] cluster config:
	{Name:functional-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:13:40.310648   11498 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:13:40.317474   11498 out.go:177] * Starting "functional-359000" primary control-plane node in "functional-359000" cluster
	I1007 05:13:40.321560   11498 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:13:40.321579   11498 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:13:40.321588   11498 cache.go:56] Caching tarball of preloaded images
	I1007 05:13:40.321681   11498 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:13:40.321687   11498 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:13:40.321764   11498 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/functional-359000/config.json ...
	I1007 05:13:40.322221   11498 start.go:360] acquireMachinesLock for functional-359000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:13:40.322256   11498 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "functional-359000"
	I1007 05:13:40.322266   11498 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:13:40.322271   11498 fix.go:54] fixHost starting: 
	I1007 05:13:40.322400   11498 fix.go:112] recreateIfNeeded on functional-359000: state=Stopped err=<nil>
	W1007 05:13:40.322412   11498 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:13:40.330478   11498 out.go:177] * Restarting existing qemu2 VM for "functional-359000" ...
	I1007 05:13:40.333465   11498 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:13:40.333506   11498 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:74:8c:b3:34:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/disk.qcow2
	I1007 05:13:40.335769   11498 main.go:141] libmachine: STDOUT: 
	I1007 05:13:40.335790   11498 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:13:40.335823   11498 fix.go:56] duration metric: took 13.549209ms for fixHost
	I1007 05:13:40.335828   11498 start.go:83] releasing machines lock for "functional-359000", held for 13.568208ms
	W1007 05:13:40.335834   11498 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:13:40.335892   11498 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:13:40.335897   11498 start.go:729] Will try again in 5 seconds ...
	I1007 05:13:45.338140   11498 start.go:360] acquireMachinesLock for functional-359000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:13:45.338530   11498 start.go:364] duration metric: took 292.791µs to acquireMachinesLock for "functional-359000"
	I1007 05:13:45.338659   11498 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:13:45.338688   11498 fix.go:54] fixHost starting: 
	I1007 05:13:45.339482   11498 fix.go:112] recreateIfNeeded on functional-359000: state=Stopped err=<nil>
	W1007 05:13:45.339512   11498 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:13:45.343093   11498 out.go:177] * Restarting existing qemu2 VM for "functional-359000" ...
	I1007 05:13:45.346725   11498 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:13:45.346942   11498 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:74:8c:b3:34:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/disk.qcow2
	I1007 05:13:45.357761   11498 main.go:141] libmachine: STDOUT: 
	I1007 05:13:45.357815   11498 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:13:45.357896   11498 fix.go:56] duration metric: took 19.214042ms for fixHost
	I1007 05:13:45.357918   11498 start.go:83] releasing machines lock for "functional-359000", held for 19.365917ms
	W1007 05:13:45.358075   11498 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-359000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-359000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:13:45.364947   11498 out.go:201] 
	W1007 05:13:45.368877   11498 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:13:45.368899   11498 out.go:270] * 
	* 
	W1007 05:13:45.371117   11498 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:13:45.379830   11498 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-359000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.189668375s for "functional-359000" cluster.
I1007 05:13:45.395001   11284 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000: exit status 7 (74.525333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-359000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)

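SoftStart takes the restart path ("Skipping create...Using existing machine configuration") rather than a fresh create because the profile's config.json survived the failed StartWithProxy run. The network backend the restart will retry is recorded in that file; a sketch for inspecting it, assuming the on-disk JSON field names match the cluster config dump in the trace above:

	# Network and SocketVMnetPath here are what the "Restarting existing
	# qemu2 VM" path will attempt to use again.
	jq '{Name, Driver, Network, SocketVMnetPath}' \
	  /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/functional-359000/config.json
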
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (28.606583ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-359000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000: exit status 7 (35.38925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-359000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)

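KubeContext and the remaining serial tests fail by cascade rather than on their own: no start ever succeeded, so minikube never wrote a functional-359000 context into the kubeconfig, and "current-context is not set" follows directly. A quick confirmation with stock kubectl, using the KUBECONFIG path from this run (the context list is expected to come back empty here):

	# List every context the kubeconfig under test actually contains.
	KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig \
	  kubectl config get-contexts
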
TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-359000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-359000 get po -A: exit status 1 (26.740958ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-359000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-359000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-359000\n"*: args "kubectl --context functional-359000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-359000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000: exit status 7 (35.435791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-359000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh sudo crictl images: exit status 83 (54.838041ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-359000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)
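For comparison, a sketch of what this check looks like against a running node; the image ID prefix is the one the failure message above expects:

	# Cached pause images should be visible to the in-node container runtime.
	out/minikube-darwin-arm64 -p functional-359000 ssh -- sudo crictl images | grep 3d18732f8686c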

TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (44.808542ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-359000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (46.623708ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (46.740833ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-359000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)

TestFunctional/serial/MinikubeKubectlCmd (0.78s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 kubectl -- --context functional-359000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 kubectl -- --context functional-359000 get pods: exit status 1 (738.406375ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-359000
	* no server found for cluster "functional-359000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-359000 kubectl -- --context functional-359000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000: exit status 7 (36.490042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-359000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.78s)
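The passthrough form exercised here is equivalent to running kubectl directly with the profile's context; against a healthy cluster the same query would be, e.g.:

	# minikube downloads a kubectl matching the cluster version on first use.
	out/minikube-darwin-arm64 -p functional-359000 kubectl -- get pods -A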

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.25s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-359000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-359000 get pods: exit status 1 (1.215094917s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-359000
	* no server found for cluster "functional-359000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-359000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000: exit status 7 (33.875458ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-359000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.25s)

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-359000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-359000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.188538708s)

-- stdout --
	* [functional-359000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-359000" primary control-plane node in "functional-359000" cluster
	* Restarting existing qemu2 VM for "functional-359000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-359000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-359000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-359000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.188884833s for "functional-359000" cluster.
I1007 05:13:56.213955   11284 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000: exit status 7 (74.78225ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-359000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
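The recovery path is the one the error output itself suggests: delete the profile (which discards the broken VM and its state) and start again. The start flags below mirror the profile's original invocation from the audit log:

	out/minikube-darwin-arm64 delete -p functional-359000
	out/minikube-darwin-arm64 start -p functional-359000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2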

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-359000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-359000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.678292ms)

** stderr ** 
	error: context "functional-359000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-359000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000: exit status 7 (34.424792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-359000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 logs: exit status 83 (78.474292ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-839000 | jenkins | v1.34.0 | 07 Oct 24 05:12 PDT |                     |
	|         | -p download-only-839000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:12 PDT | 07 Oct 24 05:12 PDT |
	| delete  | -p download-only-839000                                                  | download-only-839000 | jenkins | v1.34.0 | 07 Oct 24 05:12 PDT | 07 Oct 24 05:12 PDT |
	| start   | -o=json --download-only                                                  | download-only-318000 | jenkins | v1.34.0 | 07 Oct 24 05:12 PDT |                     |
	|         | -p download-only-318000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	| delete  | -p download-only-318000                                                  | download-only-318000 | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	| delete  | -p download-only-839000                                                  | download-only-839000 | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	| delete  | -p download-only-318000                                                  | download-only-318000 | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	| start   | --download-only -p                                                       | binary-mirror-533000 | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | binary-mirror-533000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:52031                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-533000                                                  | binary-mirror-533000 | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	| addons  | enable dashboard -p                                                      | addons-708000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | addons-708000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-708000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | addons-708000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-708000 --wait=true                                             | addons-708000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	| delete  | -p addons-708000                                                         | addons-708000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	| start   | -p nospam-561000 -n=1 --memory=2250 --wait=false                         | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-561000                                                         | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	| start   | -p functional-359000                                                     | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-359000                                                     | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-359000 cache add                                              | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-359000 cache add                                              | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-359000 cache add                                              | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-359000 cache add                                              | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	|         | minikube-local-cache-test:functional-359000                              |                      |         |         |                     |                     |
	| cache   | functional-359000 cache delete                                           | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	|         | minikube-local-cache-test:functional-359000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	| ssh     | functional-359000 ssh sudo                                               | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-359000                                                        | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-359000 ssh                                                    | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-359000 cache reload                                           | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	| ssh     | functional-359000 ssh                                                    | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-359000 kubectl --                                             | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | --context functional-359000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-359000                                                     | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 05:13:51
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 05:13:51.055753   11573 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:13:51.055901   11573 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:13:51.055903   11573 out.go:358] Setting ErrFile to fd 2...
	I1007 05:13:51.055905   11573 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:13:51.056036   11573 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:13:51.057284   11573 out.go:352] Setting JSON to false
	I1007 05:13:51.074965   11573 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6202,"bootTime":1728297029,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:13:51.075018   11573 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:13:51.081343   11573 out.go:177] * [functional-359000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:13:51.090251   11573 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:13:51.090309   11573 notify.go:220] Checking for updates...
	I1007 05:13:51.098227   11573 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:13:51.102248   11573 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:13:51.105245   11573 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:13:51.108295   11573 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:13:51.111296   11573 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:13:51.114515   11573 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:13:51.114555   11573 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:13:51.119184   11573 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:13:51.126193   11573 start.go:297] selected driver: qemu2
	I1007 05:13:51.126197   11573 start.go:901] validating driver "qemu2" against &{Name:functional-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:13:51.126259   11573 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:13:51.128805   11573 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:13:51.128828   11573 cni.go:84] Creating CNI manager for ""
	I1007 05:13:51.128856   11573 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:13:51.128900   11573 start.go:340] cluster config:
	{Name:functional-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:13:51.133460   11573 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:13:51.140275   11573 out.go:177] * Starting "functional-359000" primary control-plane node in "functional-359000" cluster
	I1007 05:13:51.144245   11573 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:13:51.144259   11573 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:13:51.144272   11573 cache.go:56] Caching tarball of preloaded images
	I1007 05:13:51.144350   11573 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:13:51.144361   11573 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:13:51.144416   11573 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/functional-359000/config.json ...
	I1007 05:13:51.144834   11573 start.go:360] acquireMachinesLock for functional-359000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:13:51.144884   11573 start.go:364] duration metric: took 42.125µs to acquireMachinesLock for "functional-359000"
	I1007 05:13:51.144892   11573 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:13:51.144895   11573 fix.go:54] fixHost starting: 
	I1007 05:13:51.145013   11573 fix.go:112] recreateIfNeeded on functional-359000: state=Stopped err=<nil>
	W1007 05:13:51.145020   11573 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:13:51.152210   11573 out.go:177] * Restarting existing qemu2 VM for "functional-359000" ...
	I1007 05:13:51.156234   11573 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:13:51.156279   11573 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:74:8c:b3:34:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/disk.qcow2
	I1007 05:13:51.158579   11573 main.go:141] libmachine: STDOUT: 
	I1007 05:13:51.158596   11573 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:13:51.158629   11573 fix.go:56] duration metric: took 13.731583ms for fixHost
	I1007 05:13:51.158632   11573 start.go:83] releasing machines lock for "functional-359000", held for 13.744959ms
	W1007 05:13:51.158637   11573 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:13:51.158700   11573 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:13:51.158705   11573 start.go:729] Will try again in 5 seconds ...
	I1007 05:13:56.160851   11573 start.go:360] acquireMachinesLock for functional-359000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:13:56.161157   11573 start.go:364] duration metric: took 239.375µs to acquireMachinesLock for "functional-359000"
	I1007 05:13:56.161278   11573 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:13:56.161291   11573 fix.go:54] fixHost starting: 
	I1007 05:13:56.161905   11573 fix.go:112] recreateIfNeeded on functional-359000: state=Stopped err=<nil>
	W1007 05:13:56.161924   11573 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:13:56.166477   11573 out.go:177] * Restarting existing qemu2 VM for "functional-359000" ...
	I1007 05:13:56.170390   11573 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:13:56.170571   11573 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:74:8c:b3:34:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/disk.qcow2
	I1007 05:13:56.178546   11573 main.go:141] libmachine: STDOUT: 
	I1007 05:13:56.178601   11573 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:13:56.178673   11573 fix.go:56] duration metric: took 17.386125ms for fixHost
	I1007 05:13:56.178686   11573 start.go:83] releasing machines lock for "functional-359000", held for 17.513166ms
	W1007 05:13:56.178870   11573 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-359000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:13:56.186323   11573 out.go:201] 
	W1007 05:13:56.190404   11573 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:13:56.190418   11573 out.go:270] * 
	W1007 05:13:56.191843   11573 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:13:56.201362   11573 out.go:201] 
	
	
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-359000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-839000 | jenkins | v1.34.0 | 07 Oct 24 05:12 PDT |                     |
|         | -p download-only-839000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:12 PDT | 07 Oct 24 05:12 PDT |
| delete  | -p download-only-839000                                                  | download-only-839000 | jenkins | v1.34.0 | 07 Oct 24 05:12 PDT | 07 Oct 24 05:12 PDT |
| start   | -o=json --download-only                                                  | download-only-318000 | jenkins | v1.34.0 | 07 Oct 24 05:12 PDT |                     |
|         | -p download-only-318000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| delete  | -p download-only-318000                                                  | download-only-318000 | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| delete  | -p download-only-839000                                                  | download-only-839000 | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| delete  | -p download-only-318000                                                  | download-only-318000 | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| start   | --download-only -p                                                       | binary-mirror-533000 | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | binary-mirror-533000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:52031                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-533000                                                  | binary-mirror-533000 | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| addons  | enable dashboard -p                                                      | addons-708000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | addons-708000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-708000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | addons-708000                                                            |                      |         |         |                     |                     |
| start   | -p addons-708000 --wait=true                                             | addons-708000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-708000                                                         | addons-708000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| start   | -p nospam-561000 -n=1 --memory=2250 --wait=false                         | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-561000                                                         | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| start   | -p functional-359000                                                     | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-359000                                                     | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-359000 cache add                                              | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-359000 cache add                                              | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-359000 cache add                                              | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-359000 cache add                                              | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | minikube-local-cache-test:functional-359000                              |                      |         |         |                     |                     |
| cache   | functional-359000 cache delete                                           | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | minikube-local-cache-test:functional-359000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| ssh     | functional-359000 ssh sudo                                               | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-359000                                                        | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-359000 ssh                                                    | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-359000 cache reload                                           | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| ssh     | functional-359000 ssh                                                    | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-359000 kubectl --                                             | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | --context functional-359000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-359000                                                     | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/10/07 05:13:51
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.0 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1007 05:13:51.055753   11573 out.go:345] Setting OutFile to fd 1 ...
I1007 05:13:51.055901   11573 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:13:51.055903   11573 out.go:358] Setting ErrFile to fd 2...
I1007 05:13:51.055905   11573 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:13:51.056036   11573 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
I1007 05:13:51.057284   11573 out.go:352] Setting JSON to false
I1007 05:13:51.074965   11573 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6202,"bootTime":1728297029,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1007 05:13:51.075018   11573 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1007 05:13:51.081343   11573 out.go:177] * [functional-359000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1007 05:13:51.090251   11573 out.go:177]   - MINIKUBE_LOCATION=18424
I1007 05:13:51.090309   11573 notify.go:220] Checking for updates...
I1007 05:13:51.098227   11573 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
I1007 05:13:51.102248   11573 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1007 05:13:51.105245   11573 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1007 05:13:51.108295   11573 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
I1007 05:13:51.111296   11573 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1007 05:13:51.114515   11573 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 05:13:51.114555   11573 driver.go:394] Setting default libvirt URI to qemu:///system
I1007 05:13:51.119184   11573 out.go:177] * Using the qemu2 driver based on existing profile
I1007 05:13:51.126193   11573 start.go:297] selected driver: qemu2
I1007 05:13:51.126197   11573 start.go:901] validating driver "qemu2" against &{Name:functional-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1007 05:13:51.126259   11573 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1007 05:13:51.128805   11573 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1007 05:13:51.128828   11573 cni.go:84] Creating CNI manager for ""
I1007 05:13:51.128856   11573 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1007 05:13:51.128900   11573 start.go:340] cluster config:
{Name:functional-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1007 05:13:51.133460   11573 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1007 05:13:51.140275   11573 out.go:177] * Starting "functional-359000" primary control-plane node in "functional-359000" cluster
I1007 05:13:51.144245   11573 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1007 05:13:51.144259   11573 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I1007 05:13:51.144272   11573 cache.go:56] Caching tarball of preloaded images
I1007 05:13:51.144350   11573 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1007 05:13:51.144361   11573 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I1007 05:13:51.144416   11573 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/functional-359000/config.json ...
I1007 05:13:51.144834   11573 start.go:360] acquireMachinesLock for functional-359000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1007 05:13:51.144884   11573 start.go:364] duration metric: took 42.125µs to acquireMachinesLock for "functional-359000"
I1007 05:13:51.144892   11573 start.go:96] Skipping create...Using existing machine configuration
I1007 05:13:51.144895   11573 fix.go:54] fixHost starting: 
I1007 05:13:51.145013   11573 fix.go:112] recreateIfNeeded on functional-359000: state=Stopped err=<nil>
W1007 05:13:51.145020   11573 fix.go:138] unexpected machine state, will restart: <nil>
I1007 05:13:51.152210   11573 out.go:177] * Restarting existing qemu2 VM for "functional-359000" ...
I1007 05:13:51.156234   11573 qemu.go:418] Using hvf for hardware acceleration
I1007 05:13:51.156279   11573 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:74:8c:b3:34:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/disk.qcow2
I1007 05:13:51.158579   11573 main.go:141] libmachine: STDOUT: 
I1007 05:13:51.158596   11573 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1007 05:13:51.158629   11573 fix.go:56] duration metric: took 13.731583ms for fixHost
I1007 05:13:51.158632   11573 start.go:83] releasing machines lock for "functional-359000", held for 13.744959ms
W1007 05:13:51.158637   11573 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1007 05:13:51.158700   11573 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1007 05:13:51.158705   11573 start.go:729] Will try again in 5 seconds ...
I1007 05:13:56.160851   11573 start.go:360] acquireMachinesLock for functional-359000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1007 05:13:56.161157   11573 start.go:364] duration metric: took 239.375µs to acquireMachinesLock for "functional-359000"
I1007 05:13:56.161278   11573 start.go:96] Skipping create...Using existing machine configuration
I1007 05:13:56.161291   11573 fix.go:54] fixHost starting: 
I1007 05:13:56.161905   11573 fix.go:112] recreateIfNeeded on functional-359000: state=Stopped err=<nil>
W1007 05:13:56.161924   11573 fix.go:138] unexpected machine state, will restart: <nil>
I1007 05:13:56.166477   11573 out.go:177] * Restarting existing qemu2 VM for "functional-359000" ...
I1007 05:13:56.170390   11573 qemu.go:418] Using hvf for hardware acceleration
I1007 05:13:56.170571   11573 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:74:8c:b3:34:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/disk.qcow2
I1007 05:13:56.178546   11573 main.go:141] libmachine: STDOUT: 
I1007 05:13:56.178601   11573 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1007 05:13:56.178673   11573 fix.go:56] duration metric: took 17.386125ms for fixHost
I1007 05:13:56.178686   11573 start.go:83] releasing machines lock for "functional-359000", held for 17.513166ms
W1007 05:13:56.178870   11573 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-359000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1007 05:13:56.186323   11573 out.go:201] 
W1007 05:13:56.190404   11573 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1007 05:13:56.190418   11573 out.go:270] * 
W1007 05:13:56.191843   11573 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1007 05:13:56.201362   11573 out.go:201] 

* The control-plane node functional-359000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-359000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
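
This failure, and the TestFunctional/serial/LogsFileCmd failure that follows, reduce to one assertion: run `minikube logs` for the profile and require certain words (here "Linux") in its output. Below is a minimal Go sketch of that check; the names checkMinikubeLogs and expectedWords are hypothetical stand-ins, not minikube's actual test helpers.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkMinikubeLogs approximates the assertion behind
// "expected minikube logs to include word": run `minikube logs`
// for the profile and require each expected word in its output.
func checkMinikubeLogs(profile string, expectedWords []string) error {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", profile, "logs").CombinedOutput()
	if err != nil {
		// With the functional-359000 host stopped, this is the "exit status 83" path.
		return fmt.Errorf("minikube logs failed: %w", err)
	}
	for _, word := range expectedWords {
		if !strings.Contains(string(out), word) {
			return fmt.Errorf("expected minikube logs to include word: %q", word)
		}
	}
	return nil
}

func main() {
	// "Linux" appears in logs from a running node; a stopped node
	// prints only the advice text captured above, so the check fails.
	if err := checkMinikubeLogs("functional-359000", []string{"Linux"}); err != nil {
		fmt.Println(err)
	}
}

The root cause visible in the Last Start log is the repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused`: the qemu2 driver launches every VM through socket_vmnet_client, so the guest never boots, the node stays Stopped, and `minikube logs` can only print advice text. A probe of just that socket, again only a sketch (reaching the socket may require elevated privileges):

package main

import (
	"fmt"
	"net"
)

func main() {
	// If the socket_vmnet daemon is down, this dial fails with
	// "connect: connection refused", the same error libmachine
	// logs above before each retry.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet reachable")
}
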

TestFunctional/serial/LogsFileCmd (0.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd2538586751/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-839000 | jenkins | v1.34.0 | 07 Oct 24 05:12 PDT |                     |
|         | -p download-only-839000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:12 PDT | 07 Oct 24 05:12 PDT |
| delete  | -p download-only-839000                                                  | download-only-839000 | jenkins | v1.34.0 | 07 Oct 24 05:12 PDT | 07 Oct 24 05:12 PDT |
| start   | -o=json --download-only                                                  | download-only-318000 | jenkins | v1.34.0 | 07 Oct 24 05:12 PDT |                     |
|         | -p download-only-318000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| delete  | -p download-only-318000                                                  | download-only-318000 | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| delete  | -p download-only-839000                                                  | download-only-839000 | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| delete  | -p download-only-318000                                                  | download-only-318000 | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| start   | --download-only -p                                                       | binary-mirror-533000 | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | binary-mirror-533000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:52031                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-533000                                                  | binary-mirror-533000 | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| addons  | enable dashboard -p                                                      | addons-708000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | addons-708000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-708000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | addons-708000                                                            |                      |         |         |                     |                     |
| start   | -p addons-708000 --wait=true                                             | addons-708000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-708000                                                         | addons-708000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| start   | -p nospam-561000 -n=1 --memory=2250 --wait=false                         | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-561000 --log_dir                                                  | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-561000                                                         | nospam-561000        | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| start   | -p functional-359000                                                     | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-359000                                                     | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-359000 cache add                                              | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-359000 cache add                                              | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-359000 cache add                                              | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-359000 cache add                                              | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | minikube-local-cache-test:functional-359000                              |                      |         |         |                     |                     |
| cache   | functional-359000 cache delete                                           | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | minikube-local-cache-test:functional-359000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| ssh     | functional-359000 ssh sudo                                               | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-359000                                                        | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-359000 ssh                                                    | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-359000 cache reload                                           | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
| ssh     | functional-359000 ssh                                                    | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT | 07 Oct 24 05:13 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-359000 kubectl --                                             | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | --context functional-359000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-359000                                                     | functional-359000    | jenkins | v1.34.0 | 07 Oct 24 05:13 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
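The final entry in the audit table above is the TestFunctional/serial/ExtraConfig restart. The table wraps long arguments, so, reassembled (binary path assumed from the rest of this run), the command under test would have been roughly:

    $ out/minikube-darwin-arm64 start -p functional-359000 \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
        --wait=all

--extra-config passes a component.key=value flag through to the named control-plane component; the same setting reappears below in the cluster config dump as ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}].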
==> Last Start <==
Log file created at: 2024/10/07 05:13:51
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.0 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1007 05:13:51.055753   11573 out.go:345] Setting OutFile to fd 1 ...
I1007 05:13:51.055901   11573 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:13:51.055903   11573 out.go:358] Setting ErrFile to fd 2...
I1007 05:13:51.055905   11573 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:13:51.056036   11573 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
I1007 05:13:51.057284   11573 out.go:352] Setting JSON to false
I1007 05:13:51.074965   11573 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6202,"bootTime":1728297029,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1007 05:13:51.075018   11573 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1007 05:13:51.081343   11573 out.go:177] * [functional-359000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1007 05:13:51.090251   11573 out.go:177]   - MINIKUBE_LOCATION=18424
I1007 05:13:51.090309   11573 notify.go:220] Checking for updates...
I1007 05:13:51.098227   11573 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
I1007 05:13:51.102248   11573 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1007 05:13:51.105245   11573 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1007 05:13:51.108295   11573 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
I1007 05:13:51.111296   11573 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1007 05:13:51.114515   11573 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 05:13:51.114555   11573 driver.go:394] Setting default libvirt URI to qemu:///system
I1007 05:13:51.119184   11573 out.go:177] * Using the qemu2 driver based on existing profile
I1007 05:13:51.126193   11573 start.go:297] selected driver: qemu2
I1007 05:13:51.126197   11573 start.go:901] validating driver "qemu2" against &{Name:functional-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1007 05:13:51.126259   11573 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1007 05:13:51.128805   11573 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1007 05:13:51.128828   11573 cni.go:84] Creating CNI manager for ""
I1007 05:13:51.128856   11573 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1007 05:13:51.128900   11573 start.go:340] cluster config:
{Name:functional-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1007 05:13:51.133460   11573 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1007 05:13:51.140275   11573 out.go:177] * Starting "functional-359000" primary control-plane node in "functional-359000" cluster
I1007 05:13:51.144245   11573 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1007 05:13:51.144259   11573 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I1007 05:13:51.144272   11573 cache.go:56] Caching tarball of preloaded images
I1007 05:13:51.144350   11573 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1007 05:13:51.144361   11573 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I1007 05:13:51.144416   11573 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/functional-359000/config.json ...
I1007 05:13:51.144834   11573 start.go:360] acquireMachinesLock for functional-359000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1007 05:13:51.144884   11573 start.go:364] duration metric: took 42.125µs to acquireMachinesLock for "functional-359000"
I1007 05:13:51.144892   11573 start.go:96] Skipping create...Using existing machine configuration
I1007 05:13:51.144895   11573 fix.go:54] fixHost starting: 
I1007 05:13:51.145013   11573 fix.go:112] recreateIfNeeded on functional-359000: state=Stopped err=<nil>
W1007 05:13:51.145020   11573 fix.go:138] unexpected machine state, will restart: <nil>
I1007 05:13:51.152210   11573 out.go:177] * Restarting existing qemu2 VM for "functional-359000" ...
I1007 05:13:51.156234   11573 qemu.go:418] Using hvf for hardware acceleration
I1007 05:13:51.156279   11573 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:74:8c:b3:34:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/disk.qcow2
I1007 05:13:51.158579   11573 main.go:141] libmachine: STDOUT: 
I1007 05:13:51.158596   11573 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I1007 05:13:51.158629   11573 fix.go:56] duration metric: took 13.731583ms for fixHost
I1007 05:13:51.158632   11573 start.go:83] releasing machines lock for "functional-359000", held for 13.744959ms
W1007 05:13:51.158637   11573 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1007 05:13:51.158700   11573 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1007 05:13:51.158705   11573 start.go:729] Will try again in 5 seconds ...
I1007 05:13:56.160851   11573 start.go:360] acquireMachinesLock for functional-359000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1007 05:13:56.161157   11573 start.go:364] duration metric: took 239.375µs to acquireMachinesLock for "functional-359000"
I1007 05:13:56.161278   11573 start.go:96] Skipping create...Using existing machine configuration
I1007 05:13:56.161291   11573 fix.go:54] fixHost starting: 
I1007 05:13:56.161905   11573 fix.go:112] recreateIfNeeded on functional-359000: state=Stopped err=<nil>
W1007 05:13:56.161924   11573 fix.go:138] unexpected machine state, will restart: <nil>
I1007 05:13:56.166477   11573 out.go:177] * Restarting existing qemu2 VM for "functional-359000" ...
I1007 05:13:56.170390   11573 qemu.go:418] Using hvf for hardware acceleration
I1007 05:13:56.170571   11573 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:74:8c:b3:34:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/functional-359000/disk.qcow2
I1007 05:13:56.178546   11573 main.go:141] libmachine: STDOUT: 
I1007 05:13:56.178601   11573 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I1007 05:13:56.178673   11573 fix.go:56] duration metric: took 17.386125ms for fixHost
I1007 05:13:56.178686   11573 start.go:83] releasing machines lock for "functional-359000", held for 17.513166ms
W1007 05:13:56.178870   11573 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-359000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1007 05:13:56.186323   11573 out.go:201] 
W1007 05:13:56.190404   11573 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1007 05:13:56.190418   11573 out.go:270] * 
W1007 05:13:56.191843   11573 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1007 05:13:56.201362   11573 out.go:201] 
--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)
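Every restart attempt in the "Last Start" log above dies at the same point: the qemu2 driver launches QEMU through socket_vmnet_client, and the connection to the socket_vmnet daemon's unix socket is refused. With that daemon down, no qemu2-driver VM on this host can get networking, which is consistent with the broad failure pattern in this run. A minimal host-side triage, assuming socket_vmnet was installed at the paths shown in the log, might look like:

    # Is anything present at the socket path minikube is trying to reach?
    $ ls -l /var/run/socket_vmnet
    # Is the socket_vmnet daemon process alive?
    $ pgrep -fl socket_vmnet
    # Does the client binary the driver invokes exist?
    $ ls -l /opt/socket_vmnet/bin/socket_vmnet_client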
TestFunctional/serial/InvalidService (0.03s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-359000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-359000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.876625ms)
** stderr ** 
	error: context "functional-359000" does not exist
** /stderr **
functional_test.go:2323: kubectl --context functional-359000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
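The stderr here is a kubectl-side failure, not a cluster-side one: since the VM never started, minikube never wrote a functional-359000 entry into the kubeconfig, so every kubectl --context functional-359000 invocation in the sections below fails before any request is sent. A quick way to confirm the missing context (a hypothetical check, not part of the test):

    $ kubectl config get-contexts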
TestFunctional/parallel/DashboardCmd (0.21s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-359000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-359000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-359000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-359000 --alsologtostderr -v=1] stderr:
I1007 05:14:38.477117   11887 out.go:345] Setting OutFile to fd 1 ...
I1007 05:14:38.477569   11887 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:14:38.477573   11887 out.go:358] Setting ErrFile to fd 2...
I1007 05:14:38.477576   11887 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:14:38.477730   11887 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
I1007 05:14:38.477958   11887 mustload.go:65] Loading cluster: functional-359000
I1007 05:14:38.478179   11887 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 05:14:38.481678   11887 out.go:177] * The control-plane node functional-359000 host is not running: state=Stopped
I1007 05:14:38.485714   11887 out.go:177]   To start a cluster, run: "minikube start -p functional-359000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000: exit status 7 (47.250416ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-359000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.21s)
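The dashboard test only asserts that the command under test eventually prints a URL on stdout. With the host stopped, minikube exits after printing the advisory text instead, hence "output didn't produce a URL". The exact invocation, copied from the log:

    $ out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-359000 --alsologtostderr -v=1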
TestFunctional/parallel/StatusCmd (0.14s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 status: exit status 7 (34.404833ms)
-- stdout --
	functional-359000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-359000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (33.867916ms)
-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped
-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-359000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 status -o json: exit status 7 (34.87375ms)
-- stdout --
	{"Name":"functional-359000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-359000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000: exit status 7 (34.141625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-359000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.14s)
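All three status modes agree that the host is Stopped; the failures are about the non-zero exit code, not the output. Exit status 7 from minikube status encodes cluster state rather than a command error, which is why the post-mortem helper itself logs "status error: exit status 7 (may be ok)". The three output modes exercised above, for reference:

    $ out/minikube-darwin-arm64 -p functional-359000 status                  # human-readable
    $ out/minikube-darwin-arm64 -p functional-359000 status -f '{{.Host}}'   # Go template
    $ out/minikube-darwin-arm64 -p functional-359000 status -o json          # machine-readable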
TestFunctional/parallel/ServiceCmdConnect (0.14s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-359000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-359000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.725416ms)
** stderr ** 
	error: context "functional-359000" does not exist
** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-359000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-359000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-359000 describe po hello-node-connect: exit status 1 (26.088125ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-359000
** /stderr **
functional_test.go:1604: "kubectl --context functional-359000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-359000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-359000 logs -l app=hello-node-connect: exit status 1 (26.183875ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-359000
** /stderr **
functional_test.go:1610: "kubectl --context functional-359000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-359000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-359000 describe svc hello-node-connect: exit status 1 (26.343084ms)
** stderr ** 
	Error in configuration: context was not found for specified context: functional-359000
** /stderr **
functional_test.go:1616: "kubectl --context functional-359000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000: exit status 7 (35.81475ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-359000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)
TestFunctional/parallel/PersistentVolumeClaim (0.03s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-359000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000: exit status 7 (33.876708ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-359000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)
TestFunctional/parallel/SSHCmd (0.14s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "echo hello": exit status 83 (50.927208ms)
-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"
-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-359000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-359000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-359000\"\n"*. args "out/minikube-darwin-arm64 -p functional-359000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "cat /etc/hostname": exit status 83 (52.924583ms)
-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"
-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-359000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-359000"- but got *"* The control-plane node functional-359000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-359000\"\n"*. args "out/minikube-darwin-arm64 -p functional-359000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000: exit status 7 (35.784459ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-359000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.14s)
TestFunctional/parallel/CpCmd (0.29s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (58.586833ms)
-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-359000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh -n functional-359000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh -n functional-359000 "sudo cat /home/docker/cp-test.txt": exit status 83 (44.908291ms)
-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-359000 ssh -n functional-359000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-359000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-359000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 cp functional-359000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1517331418/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 cp functional-359000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1517331418/001/cp-test.txt: exit status 83 (45.66275ms)
-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-359000 cp functional-359000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1517331418/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh -n functional-359000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh -n functional-359000 "sudo cat /home/docker/cp-test.txt": exit status 83 (45.901292ms)
-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-359000 ssh -n functional-359000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1517331418/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-359000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-359000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (46.005292ms)
-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-359000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh -n functional-359000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh -n functional-359000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (48.737958ms)
-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-359000 ssh -n functional-359000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-359000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-359000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.29s)
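The strings.Join hunks above are (-want +got) diffs, which appear to be go-cmp output from the test helper's content comparison: the - line is the expected payload of testdata/cp-test.txt ("Test file for checking file cp process") and the + lines are the stopped-host advisory that came back instead. The round trip being verified, copied from the log:

    $ out/minikube-darwin-arm64 -p functional-359000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    $ out/minikube-darwin-arm64 -p functional-359000 ssh -n functional-359000 "sudo cat /home/docker/cp-test.txt"

Both legs exit with status 83, the code this run's minikube returns together with the "host is not running" advisory, so no file ever reaches the VM.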
TestFunctional/parallel/FileSync (0.09s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/11284/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "sudo cat /etc/test/nested/copy/11284/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "sudo cat /etc/test/nested/copy/11284/hosts": exit status 83 (53.264875ms)
-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"
-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-359000 ssh "sudo cat /etc/test/nested/copy/11284/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-359000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-359000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-359000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-359000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000: exit status 7 (35.919959ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-359000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.09s)
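FileSync stages a file on the host and expects it to appear inside the VM at /etc/test/nested/copy/11284/hosts (11284 appears to be the test process's PID, used to keep the path unique per run). As with the other ssh-based assertions, the check never reaches a VM here, so the "file content" it diffs is just the stopped-host advisory. A manual equivalent of the check, assuming a running cluster, would be:

    $ out/minikube-darwin-arm64 -p functional-359000 ssh "sudo cat /etc/test/nested/copy/11284/hosts"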
TestFunctional/parallel/CertSync (0.34s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/11284.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "sudo cat /etc/ssl/certs/11284.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "sudo cat /etc/ssl/certs/11284.pem": exit status 83 (44.8885ms)
-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"
-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/11284.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-359000 ssh \"sudo cat /etc/ssl/certs/11284.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/11284.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-359000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-359000"
	"""
)
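CertSync verifies the same test certificate under three names: the PID-derived /etc/ssl/certs/11284.pem, a copy under /usr/share/ca-certificates/, and the hash-style name /etc/ssl/certs/51391683.0. The last follows OpenSSL's c_rehash convention, where a CA certificate is linked as <subject_hash>.0; assuming minikube_test.pem is the certificate shown in the diff above, the hash should be reproducible with:

    $ openssl x509 -noout -subject_hash -in minikube_test.pem
    51391683

The minikube_test2.pem checks below follow the same pattern with 112842.pem and 3ec20f2e.0.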
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/11284.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "sudo cat /usr/share/ca-certificates/11284.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "sudo cat /usr/share/ca-certificates/11284.pem": exit status 83 (43.68275ms)
-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"
-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/11284.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-359000 ssh \"sudo cat /usr/share/ca-certificates/11284.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/11284.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-359000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-359000"
	"""
)
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (44.726708ms)
-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"
-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-359000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-359000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-359000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/112842.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "sudo cat /etc/ssl/certs/112842.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "sudo cat /etc/ssl/certs/112842.pem": exit status 83 (51.558458ms)
-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"
-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/112842.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-359000 ssh \"sudo cat /etc/ssl/certs/112842.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/112842.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-359000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-359000"
	"""
)
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/112842.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "sudo cat /usr/share/ca-certificates/112842.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "sudo cat /usr/share/ca-certificates/112842.pem": exit status 83 (57.07325ms)
-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"
-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/112842.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-359000 ssh \"sudo cat /usr/share/ca-certificates/112842.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/112842.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-359000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-359000"
	"""
)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (59.5125ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-359000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-359000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-359000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000: exit status 7 (36.179958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-359000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.34s)
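
Note on the failure mode: every ssh-backed assertion in this run fails the same way. Exit status 83 always accompanies the "host is not running: state=Stopped" guidance message, so the cert-sync checks never reach the guest filesystem. A minimal sketch of reproducing the check by hand, assuming this run's functional-359000 profile:

    out/minikube-darwin-arm64 start -p functional-359000                                      # bring the stopped profile up first
    out/minikube-darwin-arm64 -p functional-359000 ssh "sudo cat /etc/ssl/certs/112842.pem"   # should print the test cert, not exit 83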

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-359000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-359000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.550708ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-359000

** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-359000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-359000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-359000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-359000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-359000

** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-359000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-359000 -n functional-359000: exit status 7 (35.524ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-359000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
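
Note: the label probe is a plain kubectl go-template over the first node, so it can only succeed once the kubeconfig context exists. Equivalent manual checks (a sketch, assuming a running functional-359000 context):

    kubectl --context functional-359000 get nodes --show-labels
    kubectl --context functional-359000 get nodes --output=go-template --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'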

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "sudo systemctl is-active crio": exit status 83 (44.319583ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-359000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-359000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
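
Note: this test asserts that runtimes not selected for the cluster are disabled; with ContainerRuntime=docker it wants systemd to report crio as inactive. The underlying probe, runnable by hand on a started profile (a sketch):

    out/minikube-darwin-arm64 -p functional-359000 ssh "sudo systemctl is-active crio"     # want "inactive" (non-zero exit)
    out/minikube-darwin-arm64 -p functional-359000 ssh "sudo systemctl is-active docker"   # want "active"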

TestFunctional/parallel/Version/components (0.05s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 version -o=json --components: exit status 83 (45.836958ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-359000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-359000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-359000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-359000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-359000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-359000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-359000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-359000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-359000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-359000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-359000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-359000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-359000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-359000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-359000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-359000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-359000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-359000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-359000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-359000"
--- FAIL: TestFunctional/parallel/Version/components (0.05s)
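
Note: version --components has to query binaries inside the guest (buildctl, containerd, crictl, docker, and so on), so with the host stopped it degrades to the same guidance message instead of JSON. Manual form (a sketch):

    out/minikube-darwin-arm64 -p functional-359000 version -o=json --components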

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-359000 image ls --format short --alsologtostderr:

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-359000 image ls --format short --alsologtostderr:
I1007 05:14:38.914686   11902 out.go:345] Setting OutFile to fd 1 ...
I1007 05:14:38.914887   11902 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:14:38.914890   11902 out.go:358] Setting ErrFile to fd 2...
I1007 05:14:38.914893   11902 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:14:38.915028   11902 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
I1007 05:14:38.915468   11902 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 05:14:38.915529   11902 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-359000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-359000 image ls --format table --alsologtostderr:
I1007 05:14:39.165875   11914 out.go:345] Setting OutFile to fd 1 ...
I1007 05:14:39.166056   11914 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:14:39.166059   11914 out.go:358] Setting ErrFile to fd 2...
I1007 05:14:39.166062   11914 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:14:39.166206   11914 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
I1007 05:14:39.166664   11914 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 05:14:39.166721   11914 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
I1007 05:14:44.083109   11284 retry.go:31] will retry after 31.000765271s: Temporary Error: Get "http:": http: no Host in request URL
I1007 05:15:15.076748   11284 retry.go:31] will retry after 22.247845465s: Temporary Error: Get "http:": http: no Host in request URL
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-359000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-359000 image ls --format json --alsologtostderr:
I1007 05:14:39.124882   11912 out.go:345] Setting OutFile to fd 1 ...
I1007 05:14:39.125067   11912 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:14:39.125070   11912 out.go:358] Setting ErrFile to fd 2...
I1007 05:14:39.125072   11912 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:14:39.125193   11912 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
I1007 05:14:39.125622   11912 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 05:14:39.125681   11912 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-359000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-359000 image ls --format yaml --alsologtostderr:
I1007 05:14:38.955630   11904 out.go:345] Setting OutFile to fd 1 ...
I1007 05:14:38.955809   11904 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:14:38.955813   11904 out.go:358] Setting ErrFile to fd 2...
I1007 05:14:38.955815   11904 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:14:38.955958   11904 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
I1007 05:14:38.956439   11904 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 05:14:38.956504   11904 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
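
Note: the four ImageList variants are one query with different encoders, which is why they fail together: with the host stopped there is no image store to enumerate, so short/table print nothing and json/yaml print []. The family, for reference (a sketch):

    out/minikube-darwin-arm64 -p functional-359000 image ls --format short   # bare image names
    out/minikube-darwin-arm64 -p functional-359000 image ls --format table   # the ASCII table shown above
    out/minikube-darwin-arm64 -p functional-359000 image ls --format json    # "[]" in this run
    out/minikube-darwin-arm64 -p functional-359000 image ls --format yaml    # "[]" in this run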

TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh pgrep buildkitd: exit status 83 (47.833542ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image build -t localhost/my-image:functional-359000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-359000 image build -t localhost/my-image:functional-359000 testdata/build --alsologtostderr:
I1007 05:14:39.043404   11908 out.go:345] Setting OutFile to fd 1 ...
I1007 05:14:39.044176   11908 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:14:39.044179   11908 out.go:358] Setting ErrFile to fd 2...
I1007 05:14:39.044182   11908 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:14:39.044357   11908 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
I1007 05:14:39.044789   11908 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 05:14:39.045250   11908 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 05:14:39.045496   11908 build_images.go:133] succeeded building to: 
I1007 05:14:39.045500   11908 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image ls
functional_test.go:446: expected "localhost/my-image:functional-359000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)
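
Note: the build check is two-phase: probe for a buildkit daemon over ssh, then run the build and expect the tag to show up in image ls; both phases need a running node, which is why "succeeded building to:" is empty above. By hand (a sketch):

    out/minikube-darwin-arm64 -p functional-359000 ssh pgrep buildkitd
    out/minikube-darwin-arm64 -p functional-359000 image build -t localhost/my-image:functional-359000 testdata/build
    out/minikube-darwin-arm64 -p functional-359000 image ls   # localhost/my-image should appear on success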

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-359000 docker-env) && out/minikube-darwin-arm64 status -p functional-359000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-359000 docker-env) && out/minikube-darwin-arm64 status -p functional-359000": exit status 1 (50.792458ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)
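
Note: this check only verifies that "minikube status" still exits cleanly after the shell's docker variables are pointed at the cluster. The pattern under test (a sketch, assuming a running profile):

    eval $(out/minikube-darwin-arm64 -p functional-359000 docker-env)   # exports DOCKER_HOST and friends into this shell
    out/minikube-darwin-arm64 status -p functional-359000               # must still succeed with that environment applied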

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 update-context --alsologtostderr -v=2: exit status 83 (47.762167ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
** stderr ** 
	I1007 05:14:38.773243   11896 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:14:38.774238   11896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:14:38.774241   11896 out.go:358] Setting ErrFile to fd 2...
	I1007 05:14:38.774249   11896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:14:38.774421   11896 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:14:38.774640   11896 mustload.go:65] Loading cluster: functional-359000
	I1007 05:14:38.774842   11896 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:14:38.778974   11896 out.go:177] * The control-plane node functional-359000 host is not running: state=Stopped
	I1007 05:14:38.782918   11896 out.go:177]   To start a cluster, run: "minikube start -p functional-359000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-359000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-359000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-359000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 update-context --alsologtostderr -v=2: exit status 83 (46.553958ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
** stderr ** 
	I1007 05:14:38.868051   11900 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:14:38.868223   11900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:14:38.868227   11900 out.go:358] Setting ErrFile to fd 2...
	I1007 05:14:38.868230   11900 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:14:38.868372   11900 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:14:38.868595   11900 mustload.go:65] Loading cluster: functional-359000
	I1007 05:14:38.868787   11900 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:14:38.872868   11900 out.go:177] * The control-plane node functional-359000 host is not running: state=Stopped
	I1007 05:14:38.876827   11900 out.go:177]   To start a cluster, run: "minikube start -p functional-359000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-359000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-359000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-359000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 update-context --alsologtostderr -v=2: exit status 83 (46.692584ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
** stderr ** 
	I1007 05:14:38.820752   11898 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:14:38.820926   11898 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:14:38.820929   11898 out.go:358] Setting ErrFile to fd 2...
	I1007 05:14:38.820932   11898 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:14:38.821079   11898 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:14:38.821313   11898 mustload.go:65] Loading cluster: functional-359000
	I1007 05:14:38.821521   11898 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:14:38.825710   11898 out.go:177] * The control-plane node functional-359000 host is not running: state=Stopped
	I1007 05:14:38.829899   11898 out.go:177]   To start a cluster, run: "minikube start -p functional-359000"

** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-359000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-359000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-359000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
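
Note: all three UpdateContextCmd variants run the identical command and differ only in the substring they expect ("No changes" vs. "context has been updated"); with the host stopped, update-context bails out before touching kubeconfig. Manual form (a sketch; the kubectl line is just one way to inspect the server entry update-context rewrites):

    out/minikube-darwin-arm64 -p functional-359000 update-context
    kubectl config view --minify --context functional-359000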

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image load --daemon kicbase/echo-server:functional-359000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-359000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-359000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-359000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (28.521542ms)

** stderr ** 
	error: context "functional-359000" does not exist

** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-359000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 service list: exit status 83 (48.373708ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-359000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-359000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-359000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 service list -o json: exit status 83 (51.835667ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-359000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 service --namespace=default --https --url hello-node: exit status 83 (53.894666ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-359000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

TestFunctional/parallel/ServiceCmd/Format (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 service hello-node --url --format={{.IP}}: exit status 83 (53.802375ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-359000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-359000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-359000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image load --daemon kicbase/echo-server:functional-359000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-359000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.33s)

TestFunctional/parallel/ServiceCmd/URL (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 service hello-node --url: exit status 83 (52.857084ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-359000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-359000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-359000"
functional_test.go:1569: failed to parse "* The control-plane node functional-359000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-359000\"": parse "* The control-plane node functional-359000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-359000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)
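
Note: the ServiceCmd failures cascade from DeployApp: hello-node is never created (no kubeconfig context), so every later list/url variant has nothing to resolve and hits the stopped-host path instead. The command family (a sketch):

    kubectl --context functional-359000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    out/minikube-darwin-arm64 -p functional-359000 service list
    out/minikube-darwin-arm64 -p functional-359000 service hello-node --url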

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-359000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image load --daemon kicbase/echo-server:functional-359000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-359000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)
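
Note: the load-daemon variants share one flow: tag an image on the host's docker, push it into the cluster runtime with image load --daemon, then assert it appears in image ls. The flow, as run by the test steps above (a sketch):

    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-359000
    out/minikube-darwin-arm64 -p functional-359000 image load --daemon kicbase/echo-server:functional-359000
    out/minikube-darwin-arm64 -p functional-359000 image ls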

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-359000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-359000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I1007 05:13:59.302552   11704 out.go:345] Setting OutFile to fd 1 ...
I1007 05:13:59.302697   11704 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:13:59.302699   11704 out.go:358] Setting ErrFile to fd 2...
I1007 05:13:59.302702   11704 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:13:59.302831   11704 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
I1007 05:13:59.303061   11704 mustload.go:65] Loading cluster: functional-359000
I1007 05:13:59.303281   11704 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 05:13:59.308152   11704 out.go:177] * The control-plane node functional-359000 host is not running: state=Stopped
I1007 05:13:59.316087   11704 out.go:177]   To start a cluster, run: "minikube start -p functional-359000"

stdout: * The control-plane node functional-359000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-359000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-359000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-359000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-359000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-359000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 11703: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-359000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-359000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)
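
Note: this test launches two concurrent tunnel daemons; here the first already exits with status 83, and the teardown noise ("file already closed", "process does not exist") is just the harness reaping processes that never started. Launch form (a sketch):

    out/minikube-darwin-arm64 -p functional-359000 tunnel --alsologtostderr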

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-359000": client config: context "functional-359000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (98.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1007 05:13:59.389429   11284 retry.go:31] will retry after 4.408534269s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-359000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-359000 get svc nginx-svc: exit status 1 (69.957375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-359000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-359000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (98.07s)
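
Note: the bare Get "http:" in the retry loop means the test never obtained a LoadBalancer ingress address for nginx-svc, so it retried an empty host for the full 98 seconds. With a live context, the address it polls would come from (a sketch):

    kubectl --context functional-359000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'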

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image save kicbase/echo-server:functional-359000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-359000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)
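
Note: save and load are symmetric: ImageSaveToFile should leave a tarball on the host and ImageLoadFromFile feeds it back in, so once the save produced no file this load had nothing real to verify. The pair, using this run's path (a sketch):

    out/minikube-darwin-arm64 -p functional-359000 image save kicbase/echo-server:functional-359000 /Users/jenkins/workspace/echo-server-save.tar
    out/minikube-darwin-arm64 -p functional-359000 image load /Users/jenkins/workspace/echo-server-save.tar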

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1007 05:15:37.412231   11284 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.035920666s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 10 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
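
For reference, the assertion above shells out to dig and requires the answer section to report a record ("ANSWER: 1"). A minimal Go sketch of the same probe, assuming dig is on PATH; the command, nameserver, and expected substring are taken from the output above, while the helper name probeClusterDNS is ours:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// probeClusterDNS reproduces the check above: query the cluster DNS
	// service directly and require an answer section ("ANSWER: 1").
	func probeClusterDNS(nameserver, fqdn string) error {
		out, err := exec.Command("dig", "+time=5", "+tries=3", "@"+nameserver, fqdn, "A").CombinedOutput()
		if err != nil {
			// dig exits non-zero on timeout, matching "exit status 9" above.
			return fmt.Errorf("dig failed: %w\n%s", err, out)
		}
		if !strings.Contains(string(out), "ANSWER: 1") {
			return fmt.Errorf("expected body to contain %q, got:\n%s", "ANSWER: 1", out)
		}
		return nil
	}

	func main() {
		// 10.96.0.10 is the cluster DNS ClusterIP from the failing run.
		if err := probeClusterDNS("10.96.0.10", "nginx-svc.default.svc.cluster.local."); err != nil {
			fmt.Println(err)
		}
	}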

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1007 05:16:02.556342   11284 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 05:16:12.558390   11284 retry.go:31] will retry after 2.916793886s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1007 05:16:25.479421   11284 retry.go:31] will retry after 4.394306845s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: read udp 207.254.73.72:56837->10.96.0.10:53: i/o timeout
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.06s)
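
The access check above issues a plain HTTP GET with a client timeout and retries with growing delays. A minimal Go sketch of that loop; the URL and expected body come from the log, while the fixed backoff and the helper name fetchWithRetry are our own illustration:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"strings"
		"time"
	)

	// fetchWithRetry mirrors the behaviour above: GET with a short client
	// timeout, backing off between attempts. minikube randomizes its
	// delays (retry.go:31); the linear backoff here is illustrative only.
	func fetchWithRetry(url string, attempts int) (string, error) {
		client := &http.Client{Timeout: 10 * time.Second}
		var lastErr error
		for i := 0; i < attempts; i++ {
			resp, err := client.Get(url)
			if err != nil {
				lastErr = err
			} else {
				body, readErr := io.ReadAll(resp.Body)
				resp.Body.Close()
				if readErr == nil {
					return string(body), nil
				}
				lastErr = readErr
			}
			time.Sleep(time.Duration(i+1) * 3 * time.Second)
		}
		return "", lastErr
	}

	func main() {
		body, err := fetchWithRetry("http://nginx-svc.default.svc.cluster.local.", 3)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		if !strings.Contains(body, "Welcome to nginx!") {
			fmt.Println("unexpected body:", body)
		}
	}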

TestMultiControlPlane/serial/StartCluster (9.97s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-061000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-061000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.895411625s)

-- stdout --
	* [ha-061000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-061000" primary control-plane node in "ha-061000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-061000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:16:32.961665   11953 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:16:32.961819   11953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:16:32.961823   11953 out.go:358] Setting ErrFile to fd 2...
	I1007 05:16:32.961825   11953 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:16:32.961957   11953 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:16:32.963261   11953 out.go:352] Setting JSON to false
	I1007 05:16:32.981053   11953 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6363,"bootTime":1728297029,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:16:32.981135   11953 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:16:32.986973   11953 out.go:177] * [ha-061000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:16:32.995389   11953 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:16:32.995440   11953 notify.go:220] Checking for updates...
	I1007 05:16:33.002345   11953 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:16:33.008898   11953 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:16:33.011891   11953 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:16:33.014982   11953 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:16:33.018820   11953 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:16:33.023090   11953 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:16:33.029007   11953 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:16:33.038966   11953 start.go:297] selected driver: qemu2
	I1007 05:16:33.038973   11953 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:16:33.038980   11953 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:16:33.041667   11953 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:16:33.048890   11953 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:16:33.053013   11953 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:16:33.053038   11953 cni.go:84] Creating CNI manager for ""
	I1007 05:16:33.053065   11953 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1007 05:16:33.053074   11953 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 05:16:33.053127   11953 start.go:340] cluster config:
	{Name:ha-061000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-061000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:16:33.058664   11953 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:16:33.066935   11953 out.go:177] * Starting "ha-061000" primary control-plane node in "ha-061000" cluster
	I1007 05:16:33.070956   11953 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:16:33.070981   11953 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:16:33.071000   11953 cache.go:56] Caching tarball of preloaded images
	I1007 05:16:33.071092   11953 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:16:33.071101   11953 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:16:33.071373   11953 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/ha-061000/config.json ...
	I1007 05:16:33.071389   11953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/ha-061000/config.json: {Name:mkfc754829f114aff5618089fc25d0ab769ba3e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:16:33.071740   11953 start.go:360] acquireMachinesLock for ha-061000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:16:33.071797   11953 start.go:364] duration metric: took 50.25µs to acquireMachinesLock for "ha-061000"
	I1007 05:16:33.071813   11953 start.go:93] Provisioning new machine with config: &{Name:ha-061000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-061000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:16:33.071868   11953 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:16:33.076947   11953 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:16:33.096745   11953 start.go:159] libmachine.API.Create for "ha-061000" (driver="qemu2")
	I1007 05:16:33.096773   11953 client.go:168] LocalClient.Create starting
	I1007 05:16:33.096846   11953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:16:33.096891   11953 main.go:141] libmachine: Decoding PEM data...
	I1007 05:16:33.096909   11953 main.go:141] libmachine: Parsing certificate...
	I1007 05:16:33.096953   11953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:16:33.096997   11953 main.go:141] libmachine: Decoding PEM data...
	I1007 05:16:33.097008   11953 main.go:141] libmachine: Parsing certificate...
	I1007 05:16:33.097457   11953 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:16:33.239714   11953 main.go:141] libmachine: Creating SSH key...
	I1007 05:16:33.324822   11953 main.go:141] libmachine: Creating Disk image...
	I1007 05:16:33.324834   11953 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:16:33.325037   11953 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/disk.qcow2
	I1007 05:16:33.334773   11953 main.go:141] libmachine: STDOUT: 
	I1007 05:16:33.334793   11953 main.go:141] libmachine: STDERR: 
	I1007 05:16:33.334850   11953 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/disk.qcow2 +20000M
	I1007 05:16:33.343293   11953 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:16:33.343310   11953 main.go:141] libmachine: STDERR: 
	I1007 05:16:33.343329   11953 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/disk.qcow2
	I1007 05:16:33.343334   11953 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:16:33.343344   11953 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:16:33.343367   11953 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:cb:01:67:01:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/disk.qcow2
	I1007 05:16:33.345173   11953 main.go:141] libmachine: STDOUT: 
	I1007 05:16:33.345190   11953 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:16:33.345208   11953 client.go:171] duration metric: took 248.434625ms to LocalClient.Create
	I1007 05:16:35.346825   11953 start.go:128] duration metric: took 2.274976834s to createHost
	I1007 05:16:35.346888   11953 start.go:83] releasing machines lock for "ha-061000", held for 2.2751245s
	W1007 05:16:35.346962   11953 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:16:35.359970   11953 out.go:177] * Deleting "ha-061000" in qemu2 ...
	W1007 05:16:35.384040   11953 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:16:35.384071   11953 start.go:729] Will try again in 5 seconds ...
	I1007 05:16:40.386180   11953 start.go:360] acquireMachinesLock for ha-061000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:16:40.386739   11953 start.go:364] duration metric: took 408.458µs to acquireMachinesLock for "ha-061000"
	I1007 05:16:40.386904   11953 start.go:93] Provisioning new machine with config: &{Name:ha-061000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-061000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:16:40.387144   11953 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:16:40.397259   11953 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:16:40.447399   11953 start.go:159] libmachine.API.Create for "ha-061000" (driver="qemu2")
	I1007 05:16:40.447444   11953 client.go:168] LocalClient.Create starting
	I1007 05:16:40.447571   11953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:16:40.447673   11953 main.go:141] libmachine: Decoding PEM data...
	I1007 05:16:40.447695   11953 main.go:141] libmachine: Parsing certificate...
	I1007 05:16:40.447776   11953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:16:40.447835   11953 main.go:141] libmachine: Decoding PEM data...
	I1007 05:16:40.447853   11953 main.go:141] libmachine: Parsing certificate...
	I1007 05:16:40.448503   11953 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:16:40.605288   11953 main.go:141] libmachine: Creating SSH key...
	I1007 05:16:40.759140   11953 main.go:141] libmachine: Creating Disk image...
	I1007 05:16:40.759147   11953 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:16:40.759361   11953 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/disk.qcow2
	I1007 05:16:40.769686   11953 main.go:141] libmachine: STDOUT: 
	I1007 05:16:40.769709   11953 main.go:141] libmachine: STDERR: 
	I1007 05:16:40.769762   11953 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/disk.qcow2 +20000M
	I1007 05:16:40.778152   11953 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:16:40.778169   11953 main.go:141] libmachine: STDERR: 
	I1007 05:16:40.778181   11953 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/disk.qcow2
	I1007 05:16:40.778185   11953 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:16:40.778199   11953 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:16:40.778241   11953 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:8d:76:af:18:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/disk.qcow2
	I1007 05:16:40.780046   11953 main.go:141] libmachine: STDOUT: 
	I1007 05:16:40.780063   11953 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:16:40.780078   11953 client.go:171] duration metric: took 332.635458ms to LocalClient.Create
	I1007 05:16:42.782253   11953 start.go:128] duration metric: took 2.395092542s to createHost
	I1007 05:16:42.782488   11953 start.go:83] releasing machines lock for "ha-061000", held for 2.39561725s
	W1007 05:16:42.782818   11953 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-061000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-061000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:16:42.795351   11953 out.go:201] 
	W1007 05:16:42.799838   11953 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:16:42.799892   11953 out.go:270] * 
	* 
	W1007 05:16:42.802897   11953 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:16:42.812542   11953 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-061000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (77.107666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.97s)
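
Both VM creation attempts above die at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon on /var/run/socket_vmnet ("Connection refused" means nothing is accepting on that unix socket). A small Go sketch (ours, not part of the harness) that checks the same precondition directly:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// The qemu2 driver hands the VM's network fd to socket_vmnet_client,
	// which needs the socket_vmnet daemon listening on /var/run/socket_vmnet.
	// Dialling the socket reproduces the failure mode seen in the log.
	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}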

TestMultiControlPlane/serial/DeployApp (90.27s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-061000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-061000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (63.790458ms)

** stderr ** 
	error: cluster "ha-061000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-061000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-061000 -- rollout status deployment/busybox: exit status 1 (63.020792ms)

** stderr ** 
	error: no server found for cluster "ha-061000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (61.806583ms)

** stderr ** 
	error: no server found for cluster "ha-061000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:16:43.092960   11284 retry.go:31] will retry after 1.483293904s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.067083ms)

** stderr ** 
	error: no server found for cluster "ha-061000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:16:44.688606   11284 retry.go:31] will retry after 1.434904905s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.038375ms)

** stderr ** 
	error: no server found for cluster "ha-061000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:16:46.234858   11284 retry.go:31] will retry after 2.056982628s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.841542ms)

** stderr ** 
	error: no server found for cluster "ha-061000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:16:48.406024   11284 retry.go:31] will retry after 4.740069194s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.545416ms)

** stderr ** 
	error: no server found for cluster "ha-061000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:16:53.259049   11284 retry.go:31] will retry after 4.691079485s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.211667ms)

** stderr ** 
	error: no server found for cluster "ha-061000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:16:58.061818   11284 retry.go:31] will retry after 7.822325142s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.432208ms)

** stderr ** 
	error: no server found for cluster "ha-061000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:17:05.996254   11284 retry.go:31] will retry after 10.924081603s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.566416ms)

** stderr ** 
	error: no server found for cluster "ha-061000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:17:17.032238   11284 retry.go:31] will retry after 24.665099757s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.918041ms)

** stderr ** 
	error: no server found for cluster "ha-061000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:17:41.806850   11284 retry.go:31] will retry after 30.968263199s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.071458ms)

** stderr ** 
	error: no server found for cluster "ha-061000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (63.088958ms)

** stderr ** 
	error: no server found for cluster "ha-061000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-061000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-061000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.389917ms)

** stderr ** 
	error: no server found for cluster "ha-061000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-061000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-061000 -- exec  -- nslookup kubernetes.default: exit status 1 (62.473625ms)

** stderr ** 
	error: no server found for cluster "ha-061000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-061000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-061000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.781417ms)

** stderr ** 
	error: no server found for cluster "ha-061000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (35.040708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (90.27s)
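
The harness keeps retrying the kubectl probe above with roughly exponentially growing, jittered delays (the "will retry after ..." lines from retry.go:31). A generic Go sketch of that pattern; the helper name retryExpo and the exact backoff formula are our own illustration, not minikube's actual policy:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryExpo sketches the pattern visible above ("will retry after
	// 1.48s ... 24.66s ... 30.97s"): exponential growth with jitter.
	func retryExpo(attempts int, base time.Duration, f func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = f(); err == nil {
				return nil
			}
			d := base << uint(i)                          // 2^i growth
			d += time.Duration(rand.Int63n(int64(d / 2))) // up to +50% jitter
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		_ = retryExpo(4, time.Second, func() error {
			return fmt.Errorf(`no server found for cluster "ha-061000"`)
		})
	}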

TestMultiControlPlane/serial/PingHostFromPods (0.1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-061000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.438125ms)

** stderr ** 
	error: no server found for cluster "ha-061000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (34.731583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-061000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-061000 -v=7 --alsologtostderr: exit status 83 (47.188083ms)

-- stdout --
	* The control-plane node ha-061000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-061000"

-- /stdout --
** stderr ** 
	I1007 05:18:13.303349   12050 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:18:13.303928   12050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:13.303932   12050 out.go:358] Setting ErrFile to fd 2...
	I1007 05:18:13.303934   12050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:13.304066   12050 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:18:13.304289   12050 mustload.go:65] Loading cluster: ha-061000
	I1007 05:18:13.304494   12050 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:18:13.308610   12050 out.go:177] * The control-plane node ha-061000 host is not running: state=Stopped
	I1007 05:18:13.313424   12050 out.go:177]   To start a cluster, run: "minikube start -p ha-061000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-061000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (34.884625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-061000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-061000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.689667ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-061000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-061000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-061000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (35.509666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
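
Two distinct failures stack up here: kubectl exits non-zero because the context is missing, and the subsequent JSON decode of the empty output reports "unexpected end of JSON input". A Go sketch (ours; how the harness normalizes the jsonpath text before decoding is not shown in the log) that keeps the two apart:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// nodeLabelsRaw separates "kubectl failed, so there is nothing to
	// decode" from "kubectl succeeded but produced undecodable output".
	func nodeLabelsRaw(context string) (string, error) {
		out, err := exec.Command("kubectl", "--context", context, "get", "nodes",
			"-o", `jsonpath=[{range .items[*]}{.metadata.labels},{end}]`).Output()
		if err != nil {
			return "", fmt.Errorf("kubectl failed (check the context exists): %w", err)
		}
		if len(out) == 0 {
			return "", fmt.Errorf("kubectl produced no output; nothing to decode")
		}
		return string(out), nil
	}

	func main() {
		fmt.Println(nodeLabelsRaw("ha-061000"))
	}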

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-061000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-061000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-061000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-061000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-061000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-061000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-061000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-061000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (34.184ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
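
The assertions above decode the quoted `profile list --output json` payload and check the node count and status. A minimal Go sketch of that decode against a trimmed version of the payload; the struct shapes below model only the fields touched here and are our own:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Minimal structs for the payload quoted above.
	type profileList struct {
		Valid []struct {
			Name   string
			Status string
			Config struct {
				Nodes []struct{ Name string }
			}
		} `json:"valid"`
	}

	func main() {
		// Trimmed from the run above: one node, status "Starting"; the
		// test wanted four nodes and status "HAppy".
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-061000","Status":"Starting","Config":{"Nodes":[{"Name":""}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			fmt.Println(err)
			return
		}
		p := pl.Valid[0]
		fmt.Printf("profile %s: %d node(s), status %s\n", p.Name, len(p.Config.Nodes), p.Status)
	}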

TestMultiControlPlane/serial/CopyFile (0.07s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-061000 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-061000 status --output json -v=7 --alsologtostderr: exit status 7 (34.985584ms)

-- stdout --
	{"Name":"ha-061000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1007 05:18:13.536743   12062 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:18:13.536926   12062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:13.536929   12062 out.go:358] Setting ErrFile to fd 2...
	I1007 05:18:13.536932   12062 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:13.537061   12062 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:18:13.537181   12062 out.go:352] Setting JSON to true
	I1007 05:18:13.537192   12062 mustload.go:65] Loading cluster: ha-061000
	I1007 05:18:13.537251   12062 notify.go:220] Checking for updates...
	I1007 05:18:13.537391   12062 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:18:13.537400   12062 status.go:174] checking status of ha-061000 ...
	I1007 05:18:13.537660   12062 status.go:371] ha-061000 host status = "Stopped" (err=<nil>)
	I1007 05:18:13.537664   12062 status.go:384] host is not running, skipping remaining checks
	I1007 05:18:13.537666   12062 status.go:176] ha-061000 status: &{Name:ha-061000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:335: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-061000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
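The decode failure above is a shape mismatch: with only one control-plane node reporting, the status command appears to print a single bare JSON object (see the -- stdout -- block), while ha_test.go:335 unmarshals into a slice ([]cluster.Status). A standalone reproduction of that error, with a stand-in struct carrying only the fields quoted in the stdout block:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Stand-in for minikube's cluster.Status; fields trimmed to those shown above.
    type Status struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    func main() {
        // The exact shape printed in the -- stdout -- block above: one object.
        out := []byte(`{"Name":"ha-061000","Host":"Stopped","Kubelet":"Stopped",` +
            `"APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

        var many []Status
        err := json.Unmarshal(out, &many)
        // Prints: json: cannot unmarshal object into Go value of type []main.Status
        fmt.Println(err)
    }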
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (34.669041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.07s)

TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-061000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-061000 node stop m02 -v=7 --alsologtostderr: exit status 85 (53.060334ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1007 05:18:13.607171   12066 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:18:13.607816   12066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:13.607823   12066 out.go:358] Setting ErrFile to fd 2...
	I1007 05:18:13.607826   12066 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:13.607984   12066 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:18:13.608243   12066 mustload.go:65] Loading cluster: ha-061000
	I1007 05:18:13.608456   12066 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:18:13.612211   12066 out.go:201] 
	W1007 05:18:13.616059   12066 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1007 05:18:13.616065   12066 out.go:270] * 
	* 
	W1007 05:18:13.618242   12066 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:18:13.622043   12066 out.go:201] 

** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-061000 node stop m02 -v=7 --alsologtostderr": exit status 85
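Exit status 85 here is minikube's own error path rather than a crash: the stderr above pairs it with GUEST_NODE_RETRIEVE because node m02 was never created (StartCluster failed earlier in this run). For readers reproducing the check by hand, a hedged sketch of capturing that exit code with the standard library (command line copied from the run above):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "-p", "ha-061000",
            "node", "stop", "m02", "-v=7", "--alsologtostderr")
        if err := cmd.Run(); err != nil {
            var ee *exec.ExitError
            if errors.As(err, &ee) {
                // In this run: 85, alongside GUEST_NODE_RETRIEVE on stderr.
                fmt.Println("exit status:", ee.ExitCode())
            }
        }
    }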
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr: exit status 7 (34.998291ms)

-- stdout --
	ha-061000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:18:13.660312   12068 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:18:13.660511   12068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:13.660515   12068 out.go:358] Setting ErrFile to fd 2...
	I1007 05:18:13.660517   12068 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:13.660645   12068 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:18:13.660765   12068 out.go:352] Setting JSON to false
	I1007 05:18:13.660776   12068 mustload.go:65] Loading cluster: ha-061000
	I1007 05:18:13.660820   12068 notify.go:220] Checking for updates...
	I1007 05:18:13.660980   12068 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:18:13.660992   12068 status.go:174] checking status of ha-061000 ...
	I1007 05:18:13.661248   12068 status.go:371] ha-061000 host status = "Stopped" (err=<nil>)
	I1007 05:18:13.661252   12068 status.go:384] host is not running, skipping remaining checks
	I1007 05:18:13.661254   12068 status.go:176] ha-061000 status: &{Name:ha-061000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr": ha-061000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr": ha-061000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr": ha-061000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr": ha-061000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (34.686208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-061000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-061000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-061000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-061000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (34.770583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

TestMultiControlPlane/serial/RestartSecondaryNode (42.88s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-061000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-061000 node start m02 -v=7 --alsologtostderr: exit status 85 (53.748166ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1007 05:18:13.818196   12077 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:18:13.818686   12077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:13.818690   12077 out.go:358] Setting ErrFile to fd 2...
	I1007 05:18:13.818692   12077 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:13.818847   12077 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:18:13.819088   12077 mustload.go:65] Loading cluster: ha-061000
	I1007 05:18:13.819284   12077 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:18:13.824075   12077 out.go:201] 
	W1007 05:18:13.828051   12077 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1007 05:18:13.828055   12077 out.go:270] * 
	* 
	W1007 05:18:13.830037   12077 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:18:13.833904   12077 out.go:201] 

** /stderr **
ha_test.go:424: I1007 05:18:13.818196   12077 out.go:345] Setting OutFile to fd 1 ...
I1007 05:18:13.818686   12077 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:18:13.818690   12077 out.go:358] Setting ErrFile to fd 2...
I1007 05:18:13.818692   12077 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:18:13.818847   12077 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
I1007 05:18:13.819088   12077 mustload.go:65] Loading cluster: ha-061000
I1007 05:18:13.819284   12077 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 05:18:13.824075   12077 out.go:201] 
W1007 05:18:13.828051   12077 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1007 05:18:13.828055   12077 out.go:270] * 
* 
W1007 05:18:13.830037   12077 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1007 05:18:13.833904   12077 out.go:201] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-061000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr: exit status 7 (35.656959ms)

-- stdout --
	ha-061000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:18:13.872789   12079 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:18:13.872992   12079 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:13.872995   12079 out.go:358] Setting ErrFile to fd 2...
	I1007 05:18:13.872997   12079 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:13.873147   12079 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:18:13.873274   12079 out.go:352] Setting JSON to false
	I1007 05:18:13.873285   12079 mustload.go:65] Loading cluster: ha-061000
	I1007 05:18:13.873341   12079 notify.go:220] Checking for updates...
	I1007 05:18:13.873504   12079 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:18:13.873511   12079 status.go:174] checking status of ha-061000 ...
	I1007 05:18:13.873742   12079 status.go:371] ha-061000 host status = "Stopped" (err=<nil>)
	I1007 05:18:13.873745   12079 status.go:384] host is not running, skipping remaining checks
	I1007 05:18:13.873747   12079 status.go:176] ha-061000 status: &{Name:ha-061000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 05:18:13.874665   11284 retry.go:31] will retry after 1.395412486s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr: exit status 7 (80.184875ms)

-- stdout --
	ha-061000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:18:15.350429   12081 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:18:15.350678   12081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:15.350682   12081 out.go:358] Setting ErrFile to fd 2...
	I1007 05:18:15.350685   12081 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:15.350838   12081 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:18:15.350997   12081 out.go:352] Setting JSON to false
	I1007 05:18:15.351011   12081 mustload.go:65] Loading cluster: ha-061000
	I1007 05:18:15.351048   12081 notify.go:220] Checking for updates...
	I1007 05:18:15.351309   12081 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:18:15.351318   12081 status.go:174] checking status of ha-061000 ...
	I1007 05:18:15.351606   12081 status.go:371] ha-061000 host status = "Stopped" (err=<nil>)
	I1007 05:18:15.351611   12081 status.go:384] host is not running, skipping remaining checks
	I1007 05:18:15.351613   12081 status.go:176] ha-061000 status: &{Name:ha-061000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 05:18:15.352602   11284 retry.go:31] will retry after 1.485733552s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr: exit status 7 (80.706417ms)

-- stdout --
	ha-061000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:18:16.919228   12084 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:18:16.919445   12084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:16.919449   12084 out.go:358] Setting ErrFile to fd 2...
	I1007 05:18:16.919451   12084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:16.919594   12084 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:18:16.919740   12084 out.go:352] Setting JSON to false
	I1007 05:18:16.919754   12084 mustload.go:65] Loading cluster: ha-061000
	I1007 05:18:16.919799   12084 notify.go:220] Checking for updates...
	I1007 05:18:16.920014   12084 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:18:16.920024   12084 status.go:174] checking status of ha-061000 ...
	I1007 05:18:16.920335   12084 status.go:371] ha-061000 host status = "Stopped" (err=<nil>)
	I1007 05:18:16.920339   12084 status.go:384] host is not running, skipping remaining checks
	I1007 05:18:16.920342   12084 status.go:176] ha-061000 status: &{Name:ha-061000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 05:18:16.921396   11284 retry.go:31] will retry after 2.60181196s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr: exit status 7 (79.568875ms)

-- stdout --
	ha-061000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:18:19.602890   12086 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:18:19.603108   12086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:19.603112   12086 out.go:358] Setting ErrFile to fd 2...
	I1007 05:18:19.603115   12086 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:19.603287   12086 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:18:19.603438   12086 out.go:352] Setting JSON to false
	I1007 05:18:19.603451   12086 mustload.go:65] Loading cluster: ha-061000
	I1007 05:18:19.603488   12086 notify.go:220] Checking for updates...
	I1007 05:18:19.603710   12086 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:18:19.603720   12086 status.go:174] checking status of ha-061000 ...
	I1007 05:18:19.604033   12086 status.go:371] ha-061000 host status = "Stopped" (err=<nil>)
	I1007 05:18:19.604038   12086 status.go:384] host is not running, skipping remaining checks
	I1007 05:18:19.604040   12086 status.go:176] ha-061000 status: &{Name:ha-061000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 05:18:19.605061   11284 retry.go:31] will retry after 1.984746014s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr: exit status 7 (80.151625ms)

-- stdout --
	ha-061000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:18:21.670037   12088 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:18:21.670256   12088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:21.670260   12088 out.go:358] Setting ErrFile to fd 2...
	I1007 05:18:21.670263   12088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:21.670418   12088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:18:21.670586   12088 out.go:352] Setting JSON to false
	I1007 05:18:21.670598   12088 mustload.go:65] Loading cluster: ha-061000
	I1007 05:18:21.670635   12088 notify.go:220] Checking for updates...
	I1007 05:18:21.670858   12088 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:18:21.670869   12088 status.go:174] checking status of ha-061000 ...
	I1007 05:18:21.671169   12088 status.go:371] ha-061000 host status = "Stopped" (err=<nil>)
	I1007 05:18:21.671173   12088 status.go:384] host is not running, skipping remaining checks
	I1007 05:18:21.671176   12088 status.go:176] ha-061000 status: &{Name:ha-061000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 05:18:21.672247   11284 retry.go:31] will retry after 3.465623192s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr: exit status 7 (77.926ms)

-- stdout --
	ha-061000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:18:25.216565   12090 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:18:25.216781   12090 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:25.216786   12090 out.go:358] Setting ErrFile to fd 2...
	I1007 05:18:25.216789   12090 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:25.216961   12090 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:18:25.217119   12090 out.go:352] Setting JSON to false
	I1007 05:18:25.217132   12090 mustload.go:65] Loading cluster: ha-061000
	I1007 05:18:25.217175   12090 notify.go:220] Checking for updates...
	I1007 05:18:25.217397   12090 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:18:25.217406   12090 status.go:174] checking status of ha-061000 ...
	I1007 05:18:25.217711   12090 status.go:371] ha-061000 host status = "Stopped" (err=<nil>)
	I1007 05:18:25.217715   12090 status.go:384] host is not running, skipping remaining checks
	I1007 05:18:25.217717   12090 status.go:176] ha-061000 status: &{Name:ha-061000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 05:18:25.218774   11284 retry.go:31] will retry after 7.656678306s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr: exit status 7 (76.842042ms)

-- stdout --
	ha-061000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:18:32.952317   12094 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:18:32.952537   12094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:32.952541   12094 out.go:358] Setting ErrFile to fd 2...
	I1007 05:18:32.952544   12094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:32.952732   12094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:18:32.952891   12094 out.go:352] Setting JSON to false
	I1007 05:18:32.952905   12094 mustload.go:65] Loading cluster: ha-061000
	I1007 05:18:32.952955   12094 notify.go:220] Checking for updates...
	I1007 05:18:32.953224   12094 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:18:32.953234   12094 status.go:174] checking status of ha-061000 ...
	I1007 05:18:32.953567   12094 status.go:371] ha-061000 host status = "Stopped" (err=<nil>)
	I1007 05:18:32.953572   12094 status.go:384] host is not running, skipping remaining checks
	I1007 05:18:32.953575   12094 status.go:176] ha-061000 status: &{Name:ha-061000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 05:18:32.954616   11284 retry.go:31] will retry after 7.631745273s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr: exit status 7 (79.211083ms)

-- stdout --
	ha-061000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:18:40.665760   12096 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:18:40.665971   12096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:40.665974   12096 out.go:358] Setting ErrFile to fd 2...
	I1007 05:18:40.665977   12096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:40.666163   12096 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:18:40.666322   12096 out.go:352] Setting JSON to false
	I1007 05:18:40.666335   12096 mustload.go:65] Loading cluster: ha-061000
	I1007 05:18:40.666359   12096 notify.go:220] Checking for updates...
	I1007 05:18:40.666583   12096 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:18:40.666592   12096 status.go:174] checking status of ha-061000 ...
	I1007 05:18:40.666898   12096 status.go:371] ha-061000 host status = "Stopped" (err=<nil>)
	I1007 05:18:40.666903   12096 status.go:384] host is not running, skipping remaining checks
	I1007 05:18:40.666905   12096 status.go:176] ha-061000 status: &{Name:ha-061000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 05:18:40.667912   11284 retry.go:31] will retry after 15.874164327s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr: exit status 7 (80.625167ms)

-- stdout --
	ha-061000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:18:56.622853   12102 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:18:56.623073   12102 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:56.623077   12102 out.go:358] Setting ErrFile to fd 2...
	I1007 05:18:56.623080   12102 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:56.623240   12102 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:18:56.623386   12102 out.go:352] Setting JSON to false
	I1007 05:18:56.623400   12102 mustload.go:65] Loading cluster: ha-061000
	I1007 05:18:56.623434   12102 notify.go:220] Checking for updates...
	I1007 05:18:56.623678   12102 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:18:56.623687   12102 status.go:174] checking status of ha-061000 ...
	I1007 05:18:56.623990   12102 status.go:371] ha-061000 host status = "Stopped" (err=<nil>)
	I1007 05:18:56.623994   12102 status.go:384] host is not running, skipping remaining checks
	I1007 05:18:56.623997   12102 status.go:176] ha-061000 status: &{Name:ha-061000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr" : exit status 7
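The retry cadence logged above (1.4s, 1.5s, 2.6s, 2.0s, 3.5s, 7.7s, 7.6s, 15.9s) is a jittered, roughly doubling backoff that gives up once the overall budget is spent, which is why this subtest burns 42.88s against a host that is simply Stopped. A minimal sketch of the pattern, with illustrative constants rather than retry.go's actual ones:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // Stand-in for the real status invocation; always failing, like the run above.
    func runStatus() error { return fmt.Errorf("exit status 7") }

    func main() {
        wait := time.Second
        deadline := time.Now().Add(45 * time.Second)
        for {
            err := runStatus()
            if err == nil {
                return
            }
            if time.Now().After(deadline) {
                fmt.Println("giving up:", err)
                return
            }
            // Jittered doubling, roughly matching the spacing logged above.
            sleep := wait + time.Duration(rand.Int63n(int64(wait)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            wait *= 2
        }
    }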
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (36.077291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (42.88s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-061000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-061000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-061000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-061000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-061000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-061000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-061000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-061000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (34.349583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.25s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-061000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-061000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-061000 -v=7 --alsologtostderr: (1.86975275s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-061000 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-061000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.231346208s)

-- stdout --
	* [ha-061000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-061000" primary control-plane node in "ha-061000" cluster
	* Restarting existing qemu2 VM for "ha-061000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-061000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:18:58.721781   12123 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:18:58.721971   12123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:58.721975   12123 out.go:358] Setting ErrFile to fd 2...
	I1007 05:18:58.721977   12123 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:18:58.722151   12123 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:18:58.723408   12123 out.go:352] Setting JSON to false
	I1007 05:18:58.743045   12123 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6509,"bootTime":1728297029,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:18:58.743115   12123 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:18:58.748433   12123 out.go:177] * [ha-061000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:18:58.755186   12123 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:18:58.755223   12123 notify.go:220] Checking for updates...
	I1007 05:18:58.762424   12123 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:18:58.763813   12123 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:18:58.767387   12123 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:18:58.770406   12123 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:18:58.773395   12123 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:18:58.776649   12123 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:18:58.776698   12123 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:18:58.781356   12123 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:18:58.788357   12123 start.go:297] selected driver: qemu2
	I1007 05:18:58.788363   12123 start.go:901] validating driver "qemu2" against &{Name:ha-061000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-061000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:18:58.788431   12123 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:18:58.790918   12123 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:18:58.790941   12123 cni.go:84] Creating CNI manager for ""
	I1007 05:18:58.790967   12123 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 05:18:58.791009   12123 start.go:340] cluster config:
	{Name:ha-061000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-061000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:18:58.795570   12123 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:18:58.803308   12123 out.go:177] * Starting "ha-061000" primary control-plane node in "ha-061000" cluster
	I1007 05:18:58.807367   12123 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:18:58.807385   12123 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:18:58.807395   12123 cache.go:56] Caching tarball of preloaded images
	I1007 05:18:58.807478   12123 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:18:58.807483   12123 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:18:58.807547   12123 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/ha-061000/config.json ...
	I1007 05:18:58.807991   12123 start.go:360] acquireMachinesLock for ha-061000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:18:58.808050   12123 start.go:364] duration metric: took 52.666µs to acquireMachinesLock for "ha-061000"
	I1007 05:18:58.808060   12123 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:18:58.808065   12123 fix.go:54] fixHost starting: 
	I1007 05:18:58.808195   12123 fix.go:112] recreateIfNeeded on ha-061000: state=Stopped err=<nil>
	W1007 05:18:58.808205   12123 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:18:58.815435   12123 out.go:177] * Restarting existing qemu2 VM for "ha-061000" ...
	I1007 05:18:58.819428   12123 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:18:58.819467   12123 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:8d:76:af:18:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/disk.qcow2
	I1007 05:18:58.821790   12123 main.go:141] libmachine: STDOUT: 
	I1007 05:18:58.821812   12123 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:18:58.821843   12123 fix.go:56] duration metric: took 13.775417ms for fixHost
	I1007 05:18:58.821849   12123 start.go:83] releasing machines lock for "ha-061000", held for 13.793834ms
	W1007 05:18:58.821855   12123 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:18:58.821907   12123 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:18:58.821912   12123 start.go:729] Will try again in 5 seconds ...
	I1007 05:19:03.823991   12123 start.go:360] acquireMachinesLock for ha-061000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:19:03.824353   12123 start.go:364] duration metric: took 297.875µs to acquireMachinesLock for "ha-061000"
	I1007 05:19:03.824465   12123 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:19:03.824481   12123 fix.go:54] fixHost starting: 
	I1007 05:19:03.825110   12123 fix.go:112] recreateIfNeeded on ha-061000: state=Stopped err=<nil>
	W1007 05:19:03.825138   12123 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:19:03.833480   12123 out.go:177] * Restarting existing qemu2 VM for "ha-061000" ...
	I1007 05:19:03.837548   12123 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:19:03.837757   12123 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:8d:76:af:18:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/disk.qcow2
	I1007 05:19:03.847613   12123 main.go:141] libmachine: STDOUT: 
	I1007 05:19:03.847675   12123 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:19:03.847742   12123 fix.go:56] duration metric: took 23.259416ms for fixHost
	I1007 05:19:03.847763   12123 start.go:83] releasing machines lock for "ha-061000", held for 23.388125ms
	W1007 05:19:03.847961   12123 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-061000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-061000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:19:03.856481   12123 out.go:201] 
	W1007 05:19:03.860531   12123 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:19:03.860555   12123 out.go:270] * 
	* 
	W1007 05:19:03.862946   12123 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:19:03.871441   12123 out.go:201] 

** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-061000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-061000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (36.332542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.25s)
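Note on the failure mode: every start attempt in this section dies at the same precondition. The qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot dial the unix socket at /var/run/socket_vmnet (the SocketVMnetPath in the profile config above), i.e. no socket_vmnet daemon is listening on this agent. A minimal standalone Go sketch, not part of the minikube sources, that reproduces just that reachability check:

	// check_vmnet.go: hypothetical helper. It dials the same unix socket
	// that socket_vmnet_client hands to qemu-system-aarch64 as fd=3; a
	// failure here is the "Connection refused" surfaced as GUEST_PROVISION
	// in the logs above.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the profile config
		conn, err := net.Dial("unix", sock)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this prints "unreachable", every qemu2 start on the host fails the same way before the VM ever boots, which is consistent with the remaining failures in this report.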

TestMultiControlPlane/serial/DeleteSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-arm64 -p ha-061000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-061000 node delete m03 -v=7 --alsologtostderr: exit status 83 (47.009708ms)

-- stdout --
	* The control-plane node ha-061000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-061000"

-- /stdout --
** stderr ** 
	I1007 05:19:04.029791   12135 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:19:04.030246   12135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:19:04.030250   12135 out.go:358] Setting ErrFile to fd 2...
	I1007 05:19:04.030252   12135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:19:04.030393   12135 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:19:04.030617   12135 mustload.go:65] Loading cluster: ha-061000
	I1007 05:19:04.030839   12135 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:19:04.035412   12135 out.go:177] * The control-plane node ha-061000 host is not running: state=Stopped
	I1007 05:19:04.039291   12135 out.go:177]   To start a cluster, run: "minikube start -p ha-061000"

** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-061000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr: exit status 7 (35.574459ms)

-- stdout --
	ha-061000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:19:04.077196   12137 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:19:04.077372   12137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:19:04.077375   12137 out.go:358] Setting ErrFile to fd 2...
	I1007 05:19:04.077377   12137 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:19:04.077497   12137 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:19:04.077631   12137 out.go:352] Setting JSON to false
	I1007 05:19:04.077641   12137 mustload.go:65] Loading cluster: ha-061000
	I1007 05:19:04.077695   12137 notify.go:220] Checking for updates...
	I1007 05:19:04.078288   12137 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:19:04.078306   12137 status.go:174] checking status of ha-061000 ...
	I1007 05:19:04.078730   12137 status.go:371] ha-061000 host status = "Stopped" (err=<nil>)
	I1007 05:19:04.078741   12137 status.go:384] host is not running, skipping remaining checks
	I1007 05:19:04.078744   12137 status.go:176] ha-061000 status: &{Name:ha-061000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (34.373042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.12s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-061000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-061000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-061000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-061000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.3
1.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAut
hSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (35.155625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)
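The assertion at ha_test.go:415 is a string comparison against the Status field in the output of "profile list --output json"; because the VM never started, the profile stays "Starting" and the expected "Degraded" is never observed. A hedged sketch of the same check done by hand follows; the wrapper struct is a local assumption, and only the field names (valid, Name, Status) come from the JSON quoted above:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList mirrors just the fields the test compares; the struct
	// itself is a local assumption, not a minikube type.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			// This run would print "ha-061000: Starting"; the test wanted "Degraded".
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}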

TestMultiControlPlane/serial/StopCluster (3.91s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-arm64 -p ha-061000 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-darwin-arm64 -p ha-061000 stop -v=7 --alsologtostderr: (3.800141083s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr: exit status 7 (69.653333ms)

-- stdout --
	ha-061000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:19:08.071590   12166 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:19:08.071763   12166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:19:08.071767   12166 out.go:358] Setting ErrFile to fd 2...
	I1007 05:19:08.071770   12166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:19:08.071921   12166 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:19:08.072065   12166 out.go:352] Setting JSON to false
	I1007 05:19:08.072079   12166 mustload.go:65] Loading cluster: ha-061000
	I1007 05:19:08.072119   12166 notify.go:220] Checking for updates...
	I1007 05:19:08.072354   12166 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:19:08.072364   12166 status.go:174] checking status of ha-061000 ...
	I1007 05:19:08.072652   12166 status.go:371] ha-061000 host status = "Stopped" (err=<nil>)
	I1007 05:19:08.072657   12166 status.go:384] host is not running, skipping remaining checks
	I1007 05:19:08.072659   12166 status.go:176] ha-061000 status: &{Name:ha-061000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr": ha-061000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr": ha-061000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-061000 status -v=7 --alsologtostderr": ha-061000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (36.389791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.91s)

TestMultiControlPlane/serial/RestartCluster (5.27s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-061000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-061000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.189109833s)

-- stdout --
	* [ha-061000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-061000" primary control-plane node in "ha-061000" cluster
	* Restarting existing qemu2 VM for "ha-061000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-061000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:19:08.142429   12170 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:19:08.142611   12170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:19:08.142614   12170 out.go:358] Setting ErrFile to fd 2...
	I1007 05:19:08.142616   12170 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:19:08.142739   12170 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:19:08.143797   12170 out.go:352] Setting JSON to false
	I1007 05:19:08.161272   12170 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6519,"bootTime":1728297029,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:19:08.161336   12170 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:19:08.166652   12170 out.go:177] * [ha-061000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:19:08.173517   12170 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:19:08.173565   12170 notify.go:220] Checking for updates...
	I1007 05:19:08.180556   12170 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:19:08.183505   12170 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:19:08.186548   12170 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:19:08.189606   12170 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:19:08.192495   12170 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:19:08.195867   12170 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:19:08.196159   12170 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:19:08.200557   12170 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:19:08.207520   12170 start.go:297] selected driver: qemu2
	I1007 05:19:08.207528   12170 start.go:901] validating driver "qemu2" against &{Name:ha-061000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.1 ClusterName:ha-061000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:19:08.207596   12170 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:19:08.210065   12170 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:19:08.210087   12170 cni.go:84] Creating CNI manager for ""
	I1007 05:19:08.210106   12170 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 05:19:08.210160   12170 start.go:340] cluster config:
	{Name:ha-061000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-061000 Namespace:default APIServerHAVIP: APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:19:08.214697   12170 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:19:08.221552   12170 out.go:177] * Starting "ha-061000" primary control-plane node in "ha-061000" cluster
	I1007 05:19:08.225577   12170 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:19:08.225593   12170 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:19:08.225601   12170 cache.go:56] Caching tarball of preloaded images
	I1007 05:19:08.225678   12170 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:19:08.225683   12170 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:19:08.225743   12170 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/ha-061000/config.json ...
	I1007 05:19:08.226140   12170 start.go:360] acquireMachinesLock for ha-061000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:19:08.226173   12170 start.go:364] duration metric: took 26.625µs to acquireMachinesLock for "ha-061000"
	I1007 05:19:08.226182   12170 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:19:08.226187   12170 fix.go:54] fixHost starting: 
	I1007 05:19:08.226306   12170 fix.go:112] recreateIfNeeded on ha-061000: state=Stopped err=<nil>
	W1007 05:19:08.226316   12170 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:19:08.229519   12170 out.go:177] * Restarting existing qemu2 VM for "ha-061000" ...
	I1007 05:19:08.237401   12170 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:19:08.237441   12170 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:8d:76:af:18:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/disk.qcow2
	I1007 05:19:08.239750   12170 main.go:141] libmachine: STDOUT: 
	I1007 05:19:08.239774   12170 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:19:08.239805   12170 fix.go:56] duration metric: took 13.615583ms for fixHost
	I1007 05:19:08.239810   12170 start.go:83] releasing machines lock for "ha-061000", held for 13.632208ms
	W1007 05:19:08.239815   12170 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:19:08.239849   12170 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:19:08.239854   12170 start.go:729] Will try again in 5 seconds ...
	I1007 05:19:13.241998   12170 start.go:360] acquireMachinesLock for ha-061000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:19:13.242549   12170 start.go:364] duration metric: took 437.042µs to acquireMachinesLock for "ha-061000"
	I1007 05:19:13.242710   12170 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:19:13.242733   12170 fix.go:54] fixHost starting: 
	I1007 05:19:13.243463   12170 fix.go:112] recreateIfNeeded on ha-061000: state=Stopped err=<nil>
	W1007 05:19:13.243491   12170 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:19:13.248180   12170 out.go:177] * Restarting existing qemu2 VM for "ha-061000" ...
	I1007 05:19:13.256161   12170 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:19:13.256380   12170 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:8d:76:af:18:81 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/ha-061000/disk.qcow2
	I1007 05:19:13.266823   12170 main.go:141] libmachine: STDOUT: 
	I1007 05:19:13.266875   12170 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:19:13.266949   12170 fix.go:56] duration metric: took 24.219875ms for fixHost
	I1007 05:19:13.266967   12170 start.go:83] releasing machines lock for "ha-061000", held for 24.394458ms
	W1007 05:19:13.267116   12170 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-061000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-061000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:19:13.273163   12170 out.go:201] 
	W1007 05:19:13.276167   12170 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:19:13.276191   12170 out.go:270] * 
	* 
	W1007 05:19:13.278869   12170 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:19:13.286097   12170 out.go:201] 

** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-061000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (74.211125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.27s)
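The start.go:714 and start.go:729 lines above show the recovery shape minikube applies here: one failed StartHost, a fixed 5-second wait, a single retry, then exit status 80 with reason GUEST_PROVISION. An illustrative reconstruction of that control flow, assuming the same socket path; this is not minikube's actual code, and startHost merely stands in for the driver start:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// startHost stands in for the qemu2 driver start; it fails the same
	// way while no socket_vmnet daemon is listening.
	func startHost() error {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			return err
		}
		return conn.Close()
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
				os.Exit(80) // the exit status the test observes
			}
		}
	}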

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-061000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-061000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-061000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-061000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.3
1.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAut
hSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (34.601875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-061000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-061000 --control-plane -v=7 --alsologtostderr: exit status 83 (44.96675ms)

-- stdout --
	* The control-plane node ha-061000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-061000"

-- /stdout --
** stderr ** 
	I1007 05:19:13.496883   12187 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:19:13.497085   12187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:19:13.497088   12187 out.go:358] Setting ErrFile to fd 2...
	I1007 05:19:13.497090   12187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:19:13.497236   12187 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:19:13.497477   12187 mustload.go:65] Loading cluster: ha-061000
	I1007 05:19:13.497692   12187 config.go:182] Loaded profile config "ha-061000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:19:13.502087   12187 out.go:177] * The control-plane node ha-061000 host is not running: state=Stopped
	I1007 05:19:13.505073   12187 out.go:177]   To start a cluster, run: "minikube start -p ha-061000"

** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-061000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (34.403375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-061000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-061000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-061000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServer
Port\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-061000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"Container
Runtime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SS
HAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-061000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-061000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-061000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-061000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-061000 -n ha-061000: exit status 7 (34.060917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-061000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)
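
Note on the assertion above: the test decodes `minikube profile list --output json` and expects each profile's Status to reach "HAppy" once all control planes are up; this run never got past "Starting" because no VM could boot. A minimal Go sketch of that check follows (the structs are stand-ins mirroring the "valid"/"Name"/"Status" keys visible in the JSON above, not minikube's own types):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type profile struct {
        Name   string `json:"Name"`
        Status string `json:"Status"`
    }

    type profileList struct {
        Valid []profile `json:"valid"`
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64",
            "profile", "list", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            panic(err)
        }
        for _, p := range pl.Valid {
            fmt.Printf("%s: %s\n", p.Name, p.Status) // expected "HAppy", got "Starting"
        }
    }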

TestImageBuild/serial/Setup (9.92s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-045000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-045000 --driver=qemu2 : exit status 80 (9.842333s)

-- stdout --
	* [image-045000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-045000" primary control-plane node in "image-045000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-045000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-045000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-045000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-045000 -n image-045000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-045000 -n image-045000: exit status 7 (73.156083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-045000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.92s)
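
Every qemu2 start in this report fails the same way: nothing is listening on /var/run/socket_vmnet (the SocketVMnetPath in the cluster config above), so socket_vmnet_client is refused before qemu ever boots. A quick probe for the CI host, sketched in Go under that assumption:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the same unix socket the failing starts try to reach.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", time.Second)
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err) // matches the errors above
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails, restarting the socket_vmnet daemon on the host (installed under /opt/socket_vmnet here) is the likely fix; the exact service name depends on how it was installed.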

TestJSONOutput/start/Command (9.74s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-174000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-174000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.740281625s)

-- stdout --
	{"specversion":"1.0","id":"6b8013ce-6962-4218-b810-38224574b182","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-174000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b4407594-d6be-4fb9-8530-b6ee0476447f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18424"}}
	{"specversion":"1.0","id":"67803b54-3393-40f3-a029-81cc007dab5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig"}}
	{"specversion":"1.0","id":"8b8b88ef-d29c-483c-9f86-f4742f7b72d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"9e04987e-6660-44aa-8ae7-80484ab3531b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"06b208ba-30de-4747-835d-37af71951ecd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube"}}
	{"specversion":"1.0","id":"7b62f835-ad96-4ca5-bed4-08d0faf1bbbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2a454ec6-e8fb-4919-a099-f27bde134170","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"47d9929a-3919-4722-a39a-03e41ea28ac5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"2ff622bc-f34d-4266-83d3-f19ac9dfe7ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-174000\" primary control-plane node in \"json-output-174000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ea3ee62-410a-4d16-b0be-636cbf9c3c9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"a63e2320-7852-4059-a0d1-23385f28a8c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-174000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"b46da430-7377-4e76-b1aa-41ea009d51de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"6ec3113a-eae5-49d3-bafe-24619cd65103","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"cf184737-4cce-477c-9689-acd42163d77d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-174000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"267f2d3f-e58a-40ec-b68b-7b2d828f3e0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"cb66e8f5-0e99-4542-ad15-34f52f653e84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-174000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.74s)
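
The two trailing errors explain each other: the stdout above interleaves one CloudEvent JSON object per line with raw "OUTPUT:"/"ERROR:" lines from the VM driver, so a line-by-line JSON decode stops at the first non-JSON byte, the 'O' in "OUTPUT:". A small Go sketch of that failure mode (an assumed shape, not the test's actual parser):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        lines := []string{
            `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}`,
            `OUTPUT: `,
            `ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`,
        }
        for _, ln := range lines {
            var ev map[string]any
            if err := json.Unmarshal([]byte(ln), &ev); err != nil {
                // invalid character 'O' looking for beginning of value
                fmt.Println("not a CloudEvent:", err)
                continue
            }
            fmt.Println("event type:", ev["type"])
        }
    }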

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-174000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-174000 --output=json --user=testUser: exit status 83 (83.761083ms)

-- stdout --
	{"specversion":"1.0","id":"1c44a570-ce0e-455a-ada7-c5a7ae3c7727","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-174000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"20b9ac42-ee22-4bdb-9f85-00bda3b93495","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-174000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-174000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-174000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-174000 --output=json --user=testUser: exit status 83 (49.344708ms)

-- stdout --
	* The control-plane node json-output-174000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-174000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-174000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-174000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.08s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-846000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-846000 --driver=qemu2 : exit status 80 (9.764335416s)

-- stdout --
	* [first-846000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-846000" primary control-plane node in "first-846000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-846000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-846000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-846000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-07 05:19:47.110649 -0700 PDT m=+464.392256001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-847000 -n second-847000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-847000 -n second-847000: exit status 85 (87.73275ms)

-- stdout --
	* Profile "second-847000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-847000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-847000" host is not running, skipping log retrieval (state="* Profile \"second-847000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-847000\"")
helpers_test.go:175: Cleaning up "second-847000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-847000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-07 05:19:47.315127 -0700 PDT m=+464.596738168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-846000 -n first-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-846000 -n first-846000: exit status 7 (34.531917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-846000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-846000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-846000
--- FAIL: TestMinikubeProfile (10.08s)
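
The post-mortem's `status --format={{.Host}}` is a Go text/template applied to the status record, selecting only the Host field ("Stopped" above); per the helper's own "may be ok" note, exit status 7 encodes that non-running state and 85 a missing profile. A sketch with a stand-in struct (not minikube's actual Status type):

    package main

    import (
        "os"
        "text/template"
    )

    type status struct{ Host, Kubelet, APIServer string }

    func main() {
        tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
        _ = tmpl.Execute(os.Stdout, status{Host: "Stopped"}) // prints: Stopped
    }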

TestMountStart/serial/StartWithMountFirst (10.54s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-534000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-534000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.460316708s)

-- stdout --
	* [mount-start-1-534000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-534000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-534000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-534000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-534000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-534000 -n mount-start-1-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-534000 -n mount-start-1-534000: exit status 7 (74.213667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-534000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.54s)

TestMultiNode/serial/FreshStart2Nodes (9.9s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-062000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-062000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.828596042s)

-- stdout --
	* [multinode-062000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-062000" primary control-plane node in "multinode-062000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-062000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:19:58.185442   12337 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:19:58.185602   12337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:19:58.185605   12337 out.go:358] Setting ErrFile to fd 2...
	I1007 05:19:58.185608   12337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:19:58.185735   12337 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:19:58.186908   12337 out.go:352] Setting JSON to false
	I1007 05:19:58.204682   12337 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6569,"bootTime":1728297029,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:19:58.204753   12337 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:19:58.210649   12337 out.go:177] * [multinode-062000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:19:58.217678   12337 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:19:58.217738   12337 notify.go:220] Checking for updates...
	I1007 05:19:58.224631   12337 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:19:58.227670   12337 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:19:58.230588   12337 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:19:58.233679   12337 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:19:58.236664   12337 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:19:58.239861   12337 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:19:58.243629   12337 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:19:58.250603   12337 start.go:297] selected driver: qemu2
	I1007 05:19:58.250608   12337 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:19:58.250613   12337 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:19:58.253105   12337 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:19:58.256630   12337 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:19:58.259730   12337 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:19:58.259753   12337 cni.go:84] Creating CNI manager for ""
	I1007 05:19:58.259789   12337 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1007 05:19:58.259793   12337 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 05:19:58.259830   12337 start.go:340] cluster config:
	{Name:multinode-062000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-062000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_v
mnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:19:58.264658   12337 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:19:58.272673   12337 out.go:177] * Starting "multinode-062000" primary control-plane node in "multinode-062000" cluster
	I1007 05:19:58.276506   12337 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:19:58.276522   12337 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:19:58.276532   12337 cache.go:56] Caching tarball of preloaded images
	I1007 05:19:58.276617   12337 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:19:58.276623   12337 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:19:58.276873   12337 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/multinode-062000/config.json ...
	I1007 05:19:58.276884   12337 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/multinode-062000/config.json: {Name:mkf85ad4f48b5d9824eaa500fa198cc926ad27c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:19:58.277266   12337 start.go:360] acquireMachinesLock for multinode-062000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:19:58.277318   12337 start.go:364] duration metric: took 45.584µs to acquireMachinesLock for "multinode-062000"
	I1007 05:19:58.277333   12337 start.go:93] Provisioning new machine with config: &{Name:multinode-062000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:multinode-062000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:19:58.277363   12337 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:19:58.281683   12337 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:19:58.300223   12337 start.go:159] libmachine.API.Create for "multinode-062000" (driver="qemu2")
	I1007 05:19:58.300253   12337 client.go:168] LocalClient.Create starting
	I1007 05:19:58.300320   12337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:19:58.300366   12337 main.go:141] libmachine: Decoding PEM data...
	I1007 05:19:58.300377   12337 main.go:141] libmachine: Parsing certificate...
	I1007 05:19:58.300422   12337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:19:58.300452   12337 main.go:141] libmachine: Decoding PEM data...
	I1007 05:19:58.300467   12337 main.go:141] libmachine: Parsing certificate...
	I1007 05:19:58.300919   12337 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:19:58.441953   12337 main.go:141] libmachine: Creating SSH key...
	I1007 05:19:58.558055   12337 main.go:141] libmachine: Creating Disk image...
	I1007 05:19:58.558063   12337 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:19:58.558265   12337 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/disk.qcow2
	I1007 05:19:58.568434   12337 main.go:141] libmachine: STDOUT: 
	I1007 05:19:58.568454   12337 main.go:141] libmachine: STDERR: 
	I1007 05:19:58.568513   12337 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/disk.qcow2 +20000M
	I1007 05:19:58.577005   12337 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:19:58.577021   12337 main.go:141] libmachine: STDERR: 
	I1007 05:19:58.577033   12337 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/disk.qcow2
	I1007 05:19:58.577046   12337 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:19:58.577060   12337 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:19:58.577087   12337 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:9c:a2:c9:46:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/disk.qcow2
	I1007 05:19:58.578926   12337 main.go:141] libmachine: STDOUT: 
	I1007 05:19:58.578941   12337 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:19:58.578961   12337 client.go:171] duration metric: took 278.706291ms to LocalClient.Create
	I1007 05:20:00.581066   12337 start.go:128] duration metric: took 2.303723709s to createHost
	I1007 05:20:00.581124   12337 start.go:83] releasing machines lock for "multinode-062000", held for 2.303833834s
	W1007 05:20:00.581171   12337 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:20:00.592210   12337 out.go:177] * Deleting "multinode-062000" in qemu2 ...
	W1007 05:20:00.611535   12337 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:20:00.611567   12337 start.go:729] Will try again in 5 seconds ...
	I1007 05:20:05.613737   12337 start.go:360] acquireMachinesLock for multinode-062000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:20:05.614274   12337 start.go:364] duration metric: took 444.708µs to acquireMachinesLock for "multinode-062000"
	I1007 05:20:05.614454   12337 start.go:93] Provisioning new machine with config: &{Name:multinode-062000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.1 ClusterName:multinode-062000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:
[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:20:05.614775   12337 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:20:05.620473   12337 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:20:05.670694   12337 start.go:159] libmachine.API.Create for "multinode-062000" (driver="qemu2")
	I1007 05:20:05.670884   12337 client.go:168] LocalClient.Create starting
	I1007 05:20:05.671038   12337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:20:05.671110   12337 main.go:141] libmachine: Decoding PEM data...
	I1007 05:20:05.671128   12337 main.go:141] libmachine: Parsing certificate...
	I1007 05:20:05.671196   12337 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:20:05.671252   12337 main.go:141] libmachine: Decoding PEM data...
	I1007 05:20:05.671264   12337 main.go:141] libmachine: Parsing certificate...
	I1007 05:20:05.671844   12337 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:20:05.824133   12337 main.go:141] libmachine: Creating SSH key...
	I1007 05:20:05.914790   12337 main.go:141] libmachine: Creating Disk image...
	I1007 05:20:05.914796   12337 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:20:05.914987   12337 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/disk.qcow2
	I1007 05:20:05.924949   12337 main.go:141] libmachine: STDOUT: 
	I1007 05:20:05.924966   12337 main.go:141] libmachine: STDERR: 
	I1007 05:20:05.925025   12337 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/disk.qcow2 +20000M
	I1007 05:20:05.933634   12337 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:20:05.933649   12337 main.go:141] libmachine: STDERR: 
	I1007 05:20:05.933660   12337 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/disk.qcow2
	I1007 05:20:05.933666   12337 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:20:05.933674   12337 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:20:05.933709   12337 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:de:43:39:8d:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/disk.qcow2
	I1007 05:20:05.935548   12337 main.go:141] libmachine: STDOUT: 
	I1007 05:20:05.935563   12337 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:20:05.935575   12337 client.go:171] duration metric: took 264.690834ms to LocalClient.Create
	I1007 05:20:07.937720   12337 start.go:128] duration metric: took 2.322957417s to createHost
	I1007 05:20:07.937778   12337 start.go:83] releasing machines lock for "multinode-062000", held for 2.323520542s
	W1007 05:20:07.938193   12337 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-062000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-062000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:20:07.950757   12337 out.go:201] 
	W1007 05:20:07.954982   12337 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:20:07.955008   12337 out.go:270] * 
	* 
	W1007 05:20:07.957894   12337 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:20:07.965897   12337 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-062000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000: exit status 7 (73.392916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.90s)
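
The verbose log above shows the actual launch: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client with the socket path, and qemu is passed `-netdev socket,id=net0,fd=3`, i.e. it expects the already-connected vmnet socket as inherited descriptor 3. A Go sketch of that handoff (the general mechanism only, not socket_vmnet_client's actual source):

    package main

    import (
        "net"
        "os"
        "os/exec"
    )

    func main() {
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            panic(err) // the "Connection refused" seen throughout this report
        }
        f, err := conn.(*net.UnixConn).File()
        if err != nil {
            panic(err)
        }
        cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
        cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] becomes fd 3 in the child
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }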

TestMultiNode/serial/DeployApp2Nodes (93.04s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-062000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-062000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (63.881417ms)

** stderr ** 
	error: cluster "multinode-062000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-062000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-062000 -- rollout status deployment/busybox: exit status 1 (62.288458ms)

** stderr ** 
	error: no server found for cluster "multinode-062000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (61.755458ms)

** stderr ** 
	error: no server found for cluster "multinode-062000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:20:08.244561   11284 retry.go:31] will retry after 671.761532ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.555125ms)

** stderr ** 
	error: no server found for cluster "multinode-062000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:20:09.025344   11284 retry.go:31] will retry after 1.243164016s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.818542ms)

** stderr ** 
	error: no server found for cluster "multinode-062000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:20:10.381682   11284 retry.go:31] will retry after 2.940998729s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.614959ms)

** stderr ** 
	error: no server found for cluster "multinode-062000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:20:13.433797   11284 retry.go:31] will retry after 4.831377008s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.456083ms)

** stderr ** 
	error: no server found for cluster "multinode-062000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:20:18.376995   11284 retry.go:31] will retry after 5.499356353s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.324459ms)

** stderr ** 
	error: no server found for cluster "multinode-062000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:20:23.987128   11284 retry.go:31] will retry after 8.072408285s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.60725ms)

** stderr ** 
	error: no server found for cluster "multinode-062000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:20:32.170443   11284 retry.go:31] will retry after 14.934598519s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.810083ms)

** stderr ** 
	error: no server found for cluster "multinode-062000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:20:47.216069   11284 retry.go:31] will retry after 24.230086375s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.259292ms)

** stderr ** 
	error: no server found for cluster "multinode-062000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1007 05:21:11.557460   11284 retry.go:31] will retry after 29.147659296s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.555333ms)

** stderr ** 
	error: no server found for cluster "multinode-062000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.741917ms)

** stderr ** 
	error: no server found for cluster "multinode-062000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-062000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-062000 -- exec  -- nslookup kubernetes.io: exit status 1 (61.792167ms)

** stderr ** 
	error: no server found for cluster "multinode-062000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-062000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-062000 -- exec  -- nslookup kubernetes.default: exit status 1 (62.3395ms)

** stderr ** 
	error: no server found for cluster "multinode-062000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-062000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-062000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.375541ms)

** stderr ** 
	error: no server found for cluster "multinode-062000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000: exit status 7 (35.294083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (93.04s)

TestMultiNode/serial/PingHostFrom2Pods (0.1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-062000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.526417ms)

** stderr ** 
	error: no server found for cluster "multinode-062000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000: exit status 7 (34.865292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.10s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-062000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-062000 -v 3 --alsologtostderr: exit status 83 (46.02675ms)

-- stdout --
	* The control-plane node multinode-062000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-062000"

-- /stdout --
** stderr ** 
	I1007 05:21:41.229263   12432 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:21:41.229464   12432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:41.229468   12432 out.go:358] Setting ErrFile to fd 2...
	I1007 05:21:41.229470   12432 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:41.229600   12432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:21:41.229840   12432 mustload.go:65] Loading cluster: multinode-062000
	I1007 05:21:41.230051   12432 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:21:41.234736   12432 out.go:177] * The control-plane node multinode-062000 host is not running: state=Stopped
	I1007 05:21:41.237609   12432 out.go:177]   To start a cluster, run: "minikube start -p multinode-062000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-062000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000: exit status 7 (35.374541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-062000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-062000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.774ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-062000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-062000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-062000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000: exit status 7 (35.025917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.09s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-062000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-062000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-062000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-062000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000: exit status 7 (34.751583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)

TestMultiNode/serial/CopyFile (0.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 status --output json --alsologtostderr: exit status 7 (34.991ms)

-- stdout --
	{"Name":"multinode-062000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1007 05:21:41.461443   12444 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:21:41.461626   12444 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:41.461629   12444 out.go:358] Setting ErrFile to fd 2...
	I1007 05:21:41.461631   12444 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:41.461774   12444 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:21:41.461913   12444 out.go:352] Setting JSON to true
	I1007 05:21:41.461924   12444 mustload.go:65] Loading cluster: multinode-062000
	I1007 05:21:41.461986   12444 notify.go:220] Checking for updates...
	I1007 05:21:41.462113   12444 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:21:41.462122   12444 status.go:174] checking status of multinode-062000 ...
	I1007 05:21:41.462397   12444 status.go:371] multinode-062000 host status = "Stopped" (err=<nil>)
	I1007 05:21:41.462400   12444 status.go:384] host is not running, skipping remaining checks
	I1007 05:21:41.462402   12444 status.go:176] multinode-062000 status: &{Name:multinode-062000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-062000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000: exit status 7 (34.476083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)

TestMultiNode/serial/StopNode (0.16s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 node stop m03: exit status 85 (52.481416ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-062000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 status: exit status 7 (34.005833ms)

-- stdout --
	multinode-062000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 status --alsologtostderr: exit status 7 (34.196875ms)

-- stdout --
	multinode-062000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:21:41.617626   12454 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:21:41.617802   12454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:41.617805   12454 out.go:358] Setting ErrFile to fd 2...
	I1007 05:21:41.617808   12454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:41.617946   12454 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:21:41.618064   12454 out.go:352] Setting JSON to false
	I1007 05:21:41.618076   12454 mustload.go:65] Loading cluster: multinode-062000
	I1007 05:21:41.618130   12454 notify.go:220] Checking for updates...
	I1007 05:21:41.618278   12454 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:21:41.618286   12454 status.go:174] checking status of multinode-062000 ...
	I1007 05:21:41.618515   12454 status.go:371] multinode-062000 host status = "Stopped" (err=<nil>)
	I1007 05:21:41.618518   12454 status.go:384] host is not running, skipping remaining checks
	I1007 05:21:41.618520   12454 status.go:176] multinode-062000 status: &{Name:multinode-062000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-062000 status --alsologtostderr": multinode-062000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000: exit status 7 (35.042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.16s)

TestMultiNode/serial/StartAfterStop (45.54s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 node start m03 -v=7 --alsologtostderr: exit status 85 (52.273667ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1007 05:21:41.687821   12458 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:21:41.688405   12458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:41.688409   12458 out.go:358] Setting ErrFile to fd 2...
	I1007 05:21:41.688411   12458 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:41.688600   12458 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:21:41.688814   12458 mustload.go:65] Loading cluster: multinode-062000
	I1007 05:21:41.689026   12458 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:21:41.693255   12458 out.go:201] 
	W1007 05:21:41.697147   12458 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1007 05:21:41.697153   12458 out.go:270] * 
	* 
	W1007 05:21:41.699146   12458 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:21:41.702197   12458 out.go:201] 

** /stderr **
multinode_test.go:284: I1007 05:21:41.687821   12458 out.go:345] Setting OutFile to fd 1 ...
I1007 05:21:41.688405   12458 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:21:41.688409   12458 out.go:358] Setting ErrFile to fd 2...
I1007 05:21:41.688411   12458 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 05:21:41.688600   12458 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
I1007 05:21:41.688814   12458 mustload.go:65] Loading cluster: multinode-062000
I1007 05:21:41.689026   12458 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1007 05:21:41.693255   12458 out.go:201] 
W1007 05:21:41.697147   12458 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1007 05:21:41.697153   12458 out.go:270] * 
* 
W1007 05:21:41.699146   12458 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1007 05:21:41.702197   12458 out.go:201] 

multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-062000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr: exit status 7 (35.488084ms)

-- stdout --
	multinode-062000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:21:41.739887   12460 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:21:41.740064   12460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:41.740067   12460 out.go:358] Setting ErrFile to fd 2...
	I1007 05:21:41.740070   12460 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:41.740188   12460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:21:41.740310   12460 out.go:352] Setting JSON to false
	I1007 05:21:41.740321   12460 mustload.go:65] Loading cluster: multinode-062000
	I1007 05:21:41.740393   12460 notify.go:220] Checking for updates...
	I1007 05:21:41.741034   12460 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:21:41.741100   12460 status.go:174] checking status of multinode-062000 ...
	I1007 05:21:41.741622   12460 status.go:371] multinode-062000 host status = "Stopped" (err=<nil>)
	I1007 05:21:41.741628   12460 status.go:384] host is not running, skipping remaining checks
	I1007 05:21:41.741630   12460 status.go:176] multinode-062000 status: &{Name:multinode-062000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 05:21:41.742682   11284 retry.go:31] will retry after 931.200531ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr: exit status 7 (79.3ms)

-- stdout --
	multinode-062000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:21:42.753348   12463 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:21:42.753566   12463 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:42.753570   12463 out.go:358] Setting ErrFile to fd 2...
	I1007 05:21:42.753573   12463 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:42.753759   12463 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:21:42.753924   12463 out.go:352] Setting JSON to false
	I1007 05:21:42.753938   12463 mustload.go:65] Loading cluster: multinode-062000
	I1007 05:21:42.753971   12463 notify.go:220] Checking for updates...
	I1007 05:21:42.754176   12463 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:21:42.754185   12463 status.go:174] checking status of multinode-062000 ...
	I1007 05:21:42.754479   12463 status.go:371] multinode-062000 host status = "Stopped" (err=<nil>)
	I1007 05:21:42.754484   12463 status.go:384] host is not running, skipping remaining checks
	I1007 05:21:42.754486   12463 status.go:176] multinode-062000 status: &{Name:multinode-062000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 05:21:42.755538   11284 retry.go:31] will retry after 1.367981188s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr: exit status 7 (79.328125ms)

-- stdout --
	multinode-062000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:21:44.203020   12466 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:21:44.203252   12466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:44.203256   12466 out.go:358] Setting ErrFile to fd 2...
	I1007 05:21:44.203259   12466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:44.203410   12466 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:21:44.203552   12466 out.go:352] Setting JSON to false
	I1007 05:21:44.203568   12466 mustload.go:65] Loading cluster: multinode-062000
	I1007 05:21:44.203597   12466 notify.go:220] Checking for updates...
	I1007 05:21:44.203797   12466 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:21:44.203806   12466 status.go:174] checking status of multinode-062000 ...
	I1007 05:21:44.204091   12466 status.go:371] multinode-062000 host status = "Stopped" (err=<nil>)
	I1007 05:21:44.204096   12466 status.go:384] host is not running, skipping remaining checks
	I1007 05:21:44.204098   12466 status.go:176] multinode-062000 status: &{Name:multinode-062000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 05:21:44.205171   11284 retry.go:31] will retry after 2.605598237s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr: exit status 7 (80.272208ms)

-- stdout --
	multinode-062000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:21:46.891291   12470 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:21:46.891504   12470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:46.891508   12470 out.go:358] Setting ErrFile to fd 2...
	I1007 05:21:46.891511   12470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:46.891666   12470 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:21:46.891837   12470 out.go:352] Setting JSON to false
	I1007 05:21:46.891851   12470 mustload.go:65] Loading cluster: multinode-062000
	I1007 05:21:46.891896   12470 notify.go:220] Checking for updates...
	I1007 05:21:46.892096   12470 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:21:46.892109   12470 status.go:174] checking status of multinode-062000 ...
	I1007 05:21:46.892398   12470 status.go:371] multinode-062000 host status = "Stopped" (err=<nil>)
	I1007 05:21:46.892403   12470 status.go:384] host is not running, skipping remaining checks
	I1007 05:21:46.892405   12470 status.go:176] multinode-062000 status: &{Name:multinode-062000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 05:21:46.893464   11284 retry.go:31] will retry after 3.146134626s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr: exit status 7 (79.391375ms)

-- stdout --
	multinode-062000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:21:50.119302   12477 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:21:50.119498   12477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:50.119502   12477 out.go:358] Setting ErrFile to fd 2...
	I1007 05:21:50.119505   12477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:50.119666   12477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:21:50.119824   12477 out.go:352] Setting JSON to false
	I1007 05:21:50.119837   12477 mustload.go:65] Loading cluster: multinode-062000
	I1007 05:21:50.119870   12477 notify.go:220] Checking for updates...
	I1007 05:21:50.120071   12477 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:21:50.120080   12477 status.go:174] checking status of multinode-062000 ...
	I1007 05:21:50.120367   12477 status.go:371] multinode-062000 host status = "Stopped" (err=<nil>)
	I1007 05:21:50.120372   12477 status.go:384] host is not running, skipping remaining checks
	I1007 05:21:50.120374   12477 status.go:176] multinode-062000 status: &{Name:multinode-062000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 05:21:50.121387   11284 retry.go:31] will retry after 5.608964942s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr: exit status 7 (78.801459ms)

-- stdout --
	multinode-062000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:21:55.809414   12481 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:21:55.809638   12481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:55.809643   12481 out.go:358] Setting ErrFile to fd 2...
	I1007 05:21:55.809645   12481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:21:55.809818   12481 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:21:55.809972   12481 out.go:352] Setting JSON to false
	I1007 05:21:55.809988   12481 mustload.go:65] Loading cluster: multinode-062000
	I1007 05:21:55.810042   12481 notify.go:220] Checking for updates...
	I1007 05:21:55.810241   12481 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:21:55.810250   12481 status.go:174] checking status of multinode-062000 ...
	I1007 05:21:55.810578   12481 status.go:371] multinode-062000 host status = "Stopped" (err=<nil>)
	I1007 05:21:55.810583   12481 status.go:384] host is not running, skipping remaining checks
	I1007 05:21:55.810585   12481 status.go:176] multinode-062000 status: &{Name:multinode-062000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 05:21:55.811614   11284 retry.go:31] will retry after 8.655488685s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr: exit status 7 (78.789125ms)

-- stdout --
	multinode-062000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:22:04.545982   12489 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:22:04.546231   12489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:22:04.546235   12489 out.go:358] Setting ErrFile to fd 2...
	I1007 05:22:04.546238   12489 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:22:04.546409   12489 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:22:04.546585   12489 out.go:352] Setting JSON to false
	I1007 05:22:04.546599   12489 mustload.go:65] Loading cluster: multinode-062000
	I1007 05:22:04.546644   12489 notify.go:220] Checking for updates...
	I1007 05:22:04.546888   12489 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:22:04.546897   12489 status.go:174] checking status of multinode-062000 ...
	I1007 05:22:04.547206   12489 status.go:371] multinode-062000 host status = "Stopped" (err=<nil>)
	I1007 05:22:04.547210   12489 status.go:384] host is not running, skipping remaining checks
	I1007 05:22:04.547213   12489 status.go:176] multinode-062000 status: &{Name:multinode-062000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 05:22:04.548265   11284 retry.go:31] will retry after 8.614316311s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr: exit status 7 (80.74975ms)

-- stdout --
	multinode-062000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:22:13.243488   12495 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:22:13.243702   12495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:22:13.243706   12495 out.go:358] Setting ErrFile to fd 2...
	I1007 05:22:13.243709   12495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:22:13.243867   12495 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:22:13.244014   12495 out.go:352] Setting JSON to false
	I1007 05:22:13.244028   12495 mustload.go:65] Loading cluster: multinode-062000
	I1007 05:22:13.244064   12495 notify.go:220] Checking for updates...
	I1007 05:22:13.244281   12495 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:22:13.244291   12495 status.go:174] checking status of multinode-062000 ...
	I1007 05:22:13.244589   12495 status.go:371] multinode-062000 host status = "Stopped" (err=<nil>)
	I1007 05:22:13.244594   12495 status.go:384] host is not running, skipping remaining checks
	I1007 05:22:13.244596   12495 status.go:176] multinode-062000 status: &{Name:multinode-062000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1007 05:22:13.245641   11284 retry.go:31] will retry after 13.828207719s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr: exit status 7 (80.799458ms)

-- stdout --
	multinode-062000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:22:27.154734   12506 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:22:27.154984   12506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:22:27.154989   12506 out.go:358] Setting ErrFile to fd 2...
	I1007 05:22:27.154992   12506 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:22:27.155162   12506 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:22:27.155338   12506 out.go:352] Setting JSON to false
	I1007 05:22:27.155352   12506 mustload.go:65] Loading cluster: multinode-062000
	I1007 05:22:27.155396   12506 notify.go:220] Checking for updates...
	I1007 05:22:27.155622   12506 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:22:27.155634   12506 status.go:174] checking status of multinode-062000 ...
	I1007 05:22:27.155946   12506 status.go:371] multinode-062000 host status = "Stopped" (err=<nil>)
	I1007 05:22:27.155950   12506 status.go:384] host is not running, skipping remaining checks
	I1007 05:22:27.155953   12506 status.go:176] multinode-062000 status: &{Name:multinode-062000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-062000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000: exit status 7 (35.7205ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (45.54s)

TestMultiNode/serial/RestartKeepsNodes (7.45s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-062000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-062000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-062000: (2.067379084s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-062000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-062000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.234203042s)

-- stdout --
	* [multinode-062000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-062000" primary control-plane node in "multinode-062000" cluster
	* Restarting existing qemu2 VM for "multinode-062000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-062000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:22:29.367247   12526 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:22:29.367425   12526 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:22:29.367429   12526 out.go:358] Setting ErrFile to fd 2...
	I1007 05:22:29.367433   12526 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:22:29.367584   12526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:22:29.368923   12526 out.go:352] Setting JSON to false
	I1007 05:22:29.388981   12526 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6720,"bootTime":1728297029,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:22:29.389052   12526 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:22:29.393955   12526 out.go:177] * [multinode-062000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:22:29.399933   12526 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:22:29.399973   12526 notify.go:220] Checking for updates...
	I1007 05:22:29.406845   12526 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:22:29.409911   12526 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:22:29.412875   12526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:22:29.414102   12526 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:22:29.416837   12526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:22:29.420217   12526 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:22:29.420266   12526 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:22:29.424735   12526 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:22:29.431893   12526 start.go:297] selected driver: qemu2
	I1007 05:22:29.431899   12526 start.go:901] validating driver "qemu2" against &{Name:multinode-062000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-062000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:22:29.431955   12526 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:22:29.434419   12526 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:22:29.434446   12526 cni.go:84] Creating CNI manager for ""
	I1007 05:22:29.434470   12526 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 05:22:29.434518   12526 start.go:340] cluster config:
	{Name:multinode-062000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-062000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:22:29.438920   12526 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:22:29.446833   12526 out.go:177] * Starting "multinode-062000" primary control-plane node in "multinode-062000" cluster
	I1007 05:22:29.450957   12526 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:22:29.450974   12526 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:22:29.450983   12526 cache.go:56] Caching tarball of preloaded images
	I1007 05:22:29.451068   12526 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:22:29.451073   12526 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:22:29.451137   12526 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/multinode-062000/config.json ...
	I1007 05:22:29.451545   12526 start.go:360] acquireMachinesLock for multinode-062000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:22:29.451593   12526 start.go:364] duration metric: took 42.458µs to acquireMachinesLock for "multinode-062000"
	I1007 05:22:29.451606   12526 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:22:29.451612   12526 fix.go:54] fixHost starting: 
	I1007 05:22:29.451734   12526 fix.go:112] recreateIfNeeded on multinode-062000: state=Stopped err=<nil>
	W1007 05:22:29.451743   12526 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:22:29.455854   12526 out.go:177] * Restarting existing qemu2 VM for "multinode-062000" ...
	I1007 05:22:29.463829   12526 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:22:29.463876   12526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:de:43:39:8d:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/disk.qcow2
	I1007 05:22:29.466120   12526 main.go:141] libmachine: STDOUT: 
	I1007 05:22:29.466141   12526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:22:29.466172   12526 fix.go:56] duration metric: took 14.559125ms for fixHost
	I1007 05:22:29.466176   12526 start.go:83] releasing machines lock for "multinode-062000", held for 14.57925ms
	W1007 05:22:29.466182   12526 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:22:29.466228   12526 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:22:29.466232   12526 start.go:729] Will try again in 5 seconds ...
	I1007 05:22:34.468306   12526 start.go:360] acquireMachinesLock for multinode-062000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:22:34.468837   12526 start.go:364] duration metric: took 434.584µs to acquireMachinesLock for "multinode-062000"
	I1007 05:22:34.468953   12526 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:22:34.468976   12526 fix.go:54] fixHost starting: 
	I1007 05:22:34.469756   12526 fix.go:112] recreateIfNeeded on multinode-062000: state=Stopped err=<nil>
	W1007 05:22:34.469802   12526 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:22:34.474357   12526 out.go:177] * Restarting existing qemu2 VM for "multinode-062000" ...
	I1007 05:22:34.482207   12526 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:22:34.482442   12526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:de:43:39:8d:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/disk.qcow2
	I1007 05:22:34.492531   12526 main.go:141] libmachine: STDOUT: 
	I1007 05:22:34.492583   12526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:22:34.492680   12526 fix.go:56] duration metric: took 23.703208ms for fixHost
	I1007 05:22:34.492696   12526 start.go:83] releasing machines lock for "multinode-062000", held for 23.83575ms
	W1007 05:22:34.492898   12526 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-062000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-062000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:22:34.501354   12526 out.go:201] 
	W1007 05:22:34.505456   12526 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:22:34.505523   12526 out.go:270] * 
	* 
	W1007 05:22:34.508007   12526 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:22:34.516290   12526 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-062000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-062000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000: exit status 7 (35.996541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.45s)
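
Every restart attempt in this run dies the same way: libmachine execs qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so qemu never receives its network file descriptor. "Connection refused" on a unix socket means the socket file may exist but nothing is listening on it. The following diagnostic sketch (not part of the test suite) checks the same precondition from Go:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// The path minikube passes to socket_vmnet_client in the logs above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here reproduces the libmachine STDERR:
		// the daemon that should own the socket is not running.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening")
}

Run before the suite, a check like this would separate "daemon down on the agent" (the apparent state throughout this report) from genuine qemu2 driver regressions.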

TestMultiNode/serial/DeleteNode (0.12s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 node delete m03: exit status 83 (47.456292ms)

-- stdout --
	* The control-plane node multinode-062000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-062000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-062000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 status --alsologtostderr: exit status 7 (35.157792ms)

-- stdout --
	multinode-062000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:22:34.723200   12542 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:22:34.723382   12542 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:22:34.723386   12542 out.go:358] Setting ErrFile to fd 2...
	I1007 05:22:34.723388   12542 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:22:34.723514   12542 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:22:34.723648   12542 out.go:352] Setting JSON to false
	I1007 05:22:34.723659   12542 mustload.go:65] Loading cluster: multinode-062000
	I1007 05:22:34.723722   12542 notify.go:220] Checking for updates...
	I1007 05:22:34.723859   12542 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:22:34.723870   12542 status.go:174] checking status of multinode-062000 ...
	I1007 05:22:34.724111   12542 status.go:371] multinode-062000 host status = "Stopped" (err=<nil>)
	I1007 05:22:34.724114   12542 status.go:384] host is not running, skipping remaining checks
	I1007 05:22:34.724117   12542 status.go:176] multinode-062000 status: &{Name:multinode-062000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-062000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000: exit status 7 (34.337583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.12s)

TestMultiNode/serial/StopMultiNode (3.64s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-062000 stop: (3.487843792s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 status: exit status 7 (77.211541ms)

-- stdout --
	multinode-062000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-062000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-062000 status --alsologtostderr: exit status 7 (36.088083ms)

-- stdout --
	multinode-062000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1007 05:22:38.359275   12568 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:22:38.359453   12568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:22:38.359456   12568 out.go:358] Setting ErrFile to fd 2...
	I1007 05:22:38.359458   12568 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:22:38.359590   12568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:22:38.359709   12568 out.go:352] Setting JSON to false
	I1007 05:22:38.359721   12568 mustload.go:65] Loading cluster: multinode-062000
	I1007 05:22:38.359792   12568 notify.go:220] Checking for updates...
	I1007 05:22:38.359936   12568 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:22:38.359945   12568 status.go:174] checking status of multinode-062000 ...
	I1007 05:22:38.360199   12568 status.go:371] multinode-062000 host status = "Stopped" (err=<nil>)
	I1007 05:22:38.360203   12568 status.go:384] host is not running, skipping remaining checks
	I1007 05:22:38.360205   12568 status.go:176] multinode-062000 status: &{Name:multinode-062000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-062000 status --alsologtostderr": multinode-062000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-062000 status --alsologtostderr": multinode-062000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000: exit status 7 (34.466166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.64s)
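
The stop itself succeeds (3.49s); what fails are the assertions at multinode_test.go:364 and :368, which count stopped hosts and kubelets in the status output. Only one node block is printed because the worker node was never successfully added earlier in the run, while the test expects more. A sketch of that counting check follows; the expected count of 2 is an assumption based on this being a two-node scenario, not a value taken from the test source.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Status output as captured above: a single control-plane block.
	status := "multinode-062000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
	const want = 2 // assumed: one control plane + one worker
	if got := strings.Count(status, "host: Stopped"); got != want {
		fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, want)
	}
	if got := strings.Count(status, "kubelet: Stopped"); got != want {
		fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, want)
	}
}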

TestMultiNode/serial/RestartMultiNode (5.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-062000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-062000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.193896458s)

-- stdout --
	* [multinode-062000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-062000" primary control-plane node in "multinode-062000" cluster
	* Restarting existing qemu2 VM for "multinode-062000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-062000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:22:38.428373   12572 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:22:38.428523   12572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:22:38.428526   12572 out.go:358] Setting ErrFile to fd 2...
	I1007 05:22:38.428529   12572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:22:38.428653   12572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:22:38.429688   12572 out.go:352] Setting JSON to false
	I1007 05:22:38.447372   12572 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6729,"bootTime":1728297029,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:22:38.447441   12572 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:22:38.451937   12572 out.go:177] * [multinode-062000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:22:38.459836   12572 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:22:38.459895   12572 notify.go:220] Checking for updates...
	I1007 05:22:38.465137   12572 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:22:38.467820   12572 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:22:38.470787   12572 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:22:38.473804   12572 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:22:38.476752   12572 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:22:38.480126   12572 config.go:182] Loaded profile config "multinode-062000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:22:38.480406   12572 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:22:38.484774   12572 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:22:38.491769   12572 start.go:297] selected driver: qemu2
	I1007 05:22:38.491778   12572 start.go:901] validating driver "qemu2" against &{Name:multinode-062000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-062000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:22:38.491858   12572 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:22:38.494262   12572 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:22:38.494282   12572 cni.go:84] Creating CNI manager for ""
	I1007 05:22:38.494302   12572 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1007 05:22:38.494342   12572 start.go:340] cluster config:
	{Name:multinode-062000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-062000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:22:38.498797   12572 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:22:38.506762   12572 out.go:177] * Starting "multinode-062000" primary control-plane node in "multinode-062000" cluster
	I1007 05:22:38.510751   12572 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:22:38.510763   12572 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:22:38.510768   12572 cache.go:56] Caching tarball of preloaded images
	I1007 05:22:38.510843   12572 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:22:38.510849   12572 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:22:38.510907   12572 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/multinode-062000/config.json ...
	I1007 05:22:38.511319   12572 start.go:360] acquireMachinesLock for multinode-062000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:22:38.511350   12572 start.go:364] duration metric: took 24.292µs to acquireMachinesLock for "multinode-062000"
	I1007 05:22:38.511359   12572 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:22:38.511364   12572 fix.go:54] fixHost starting: 
	I1007 05:22:38.511495   12572 fix.go:112] recreateIfNeeded on multinode-062000: state=Stopped err=<nil>
	W1007 05:22:38.511504   12572 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:22:38.519807   12572 out.go:177] * Restarting existing qemu2 VM for "multinode-062000" ...
	I1007 05:22:38.523745   12572 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:22:38.523781   12572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:de:43:39:8d:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/disk.qcow2
	I1007 05:22:38.525919   12572 main.go:141] libmachine: STDOUT: 
	I1007 05:22:38.525939   12572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:22:38.525971   12572 fix.go:56] duration metric: took 14.605292ms for fixHost
	I1007 05:22:38.525976   12572 start.go:83] releasing machines lock for "multinode-062000", held for 14.622375ms
	W1007 05:22:38.525982   12572 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:22:38.526034   12572 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:22:38.526038   12572 start.go:729] Will try again in 5 seconds ...
	I1007 05:22:43.528115   12572 start.go:360] acquireMachinesLock for multinode-062000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:22:43.528673   12572 start.go:364] duration metric: took 470.958µs to acquireMachinesLock for "multinode-062000"
	I1007 05:22:43.528809   12572 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:22:43.528831   12572 fix.go:54] fixHost starting: 
	I1007 05:22:43.529646   12572 fix.go:112] recreateIfNeeded on multinode-062000: state=Stopped err=<nil>
	W1007 05:22:43.529674   12572 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:22:43.538216   12572 out.go:177] * Restarting existing qemu2 VM for "multinode-062000" ...
	I1007 05:22:43.542264   12572 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:22:43.542424   12572 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:de:43:39:8d:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/multinode-062000/disk.qcow2
	I1007 05:22:43.552511   12572 main.go:141] libmachine: STDOUT: 
	I1007 05:22:43.552560   12572 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:22:43.552720   12572 fix.go:56] duration metric: took 23.810917ms for fixHost
	I1007 05:22:43.552739   12572 start.go:83] releasing machines lock for "multinode-062000", held for 24.041334ms
	W1007 05:22:43.552969   12572 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-062000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-062000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:22:43.561231   12572 out.go:201] 
	W1007 05:22:43.565247   12572 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:22:43.565289   12572 out.go:270] * 
	* 
	W1007 05:22:43.567880   12572 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:22:43.576165   12572 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-062000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000: exit status 7 (75.87875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)
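
Unlike the test harness's jittered polling, minikube's own start path retries exactly once: fixHost fails, start.go logs "Will try again in 5 seconds ...", the second fixHost fails identically, and the run exits with the GUEST_PROVISION code 80. A compressed sketch of that flow is below; fixHost here is a stand-in that always returns the error seen in this run, not a reimplementation of minikube's fix.go.

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// fixHost stands in for minikube's fix.go; in this run it always
// surfaces the qemu2 driver's socket_vmnet error.
func fixHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := fixHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second)
		if err = fixHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80) // exit status 80, as the test reports above
		}
	}
	fmt.Println("host started")
}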

TestMultiNode/serial/ValidateNameConflict (20.36s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-062000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-062000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-062000-m01 --driver=qemu2 : exit status 80 (10.173608084s)

-- stdout --
	* [multinode-062000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-062000-m01" primary control-plane node in "multinode-062000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-062000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-062000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-062000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-062000-m02 --driver=qemu2 : exit status 80 (9.947685459s)

-- stdout --
	* [multinode-062000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-062000-m02" primary control-plane node in "multinode-062000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-062000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-062000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-062000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-062000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-062000: exit status 83 (89.526583ms)

-- stdout --
	* The control-plane node multinode-062000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-062000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-062000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-062000 -n multinode-062000: exit status 7 (34.900833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.36s)
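
Three exit codes recur throughout these failures, and all three are visible in this section: 7 from "status" when host, kubelet, and apiserver all report Stopped; 80 when "start" aborts with GUEST_PROVISION; and 83 when a command declines to act because the control-plane host is not running. The mapping in the sketch below is read off this report, not taken from minikube's documented reason codes, so treat it as a triage aid for these logs only.

package main

import "fmt"

// describe maps the exit codes observed in this report to the behavior
// that accompanied them; the wording is inferred from the logs above.
func describe(code int) string {
	switch code {
	case 7:
		return `"status": host/kubelet/apiserver all Stopped`
	case 80:
		return `"start": GUEST_PROVISION, driver could not start the VM`
	case 83:
		return `command skipped: control-plane host not running`
	default:
		return "not observed in this report"
	}
}

func main() {
	for _, c := range []int{7, 80, 83} {
		fmt.Printf("exit %d: %s\n", c, describe(c))
	}
}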

TestPreload (9.92s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-706000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-706000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.7667765s)

-- stdout --
	* [test-preload-706000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-706000" primary control-plane node in "test-preload-706000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-706000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:23:04.178765   12643 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:23:04.178928   12643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:23:04.178931   12643 out.go:358] Setting ErrFile to fd 2...
	I1007 05:23:04.178934   12643 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:23:04.179063   12643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:23:04.180269   12643 out.go:352] Setting JSON to false
	I1007 05:23:04.197945   12643 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6755,"bootTime":1728297029,"procs":531,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:23:04.198047   12643 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:23:04.202682   12643 out.go:177] * [test-preload-706000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:23:04.209583   12643 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:23:04.209613   12643 notify.go:220] Checking for updates...
	I1007 05:23:04.216515   12643 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:23:04.219567   12643 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:23:04.222577   12643 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:23:04.225511   12643 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:23:04.228530   12643 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:23:04.231980   12643 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:23:04.232034   12643 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:23:04.236506   12643 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:23:04.243542   12643 start.go:297] selected driver: qemu2
	I1007 05:23:04.243549   12643 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:23:04.243555   12643 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:23:04.246022   12643 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:23:04.249456   12643 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:23:04.252593   12643 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:23:04.252624   12643 cni.go:84] Creating CNI manager for ""
	I1007 05:23:04.252644   12643 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:23:04.252649   12643 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:23:04.252690   12643 start.go:340] cluster config:
	{Name:test-preload-706000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-706000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/s
ocket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:23:04.257399   12643 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:23:04.264508   12643 out.go:177] * Starting "test-preload-706000" primary control-plane node in "test-preload-706000" cluster
	I1007 05:23:04.268536   12643 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1007 05:23:04.268652   12643 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/test-preload-706000/config.json ...
	I1007 05:23:04.268671   12643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/test-preload-706000/config.json: {Name:mk0a6e4d9f7c7932f16e549e388d8752a1a48b78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:23:04.268666   12643 cache.go:107] acquiring lock: {Name:mk8efece51cdcb9f88d49f66f9abcf441e534f05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:23:04.268686   12643 cache.go:107] acquiring lock: {Name:mk2a9b8c79fbe2b9606ed5d559574be7a7e8ccc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:23:04.268703   12643 cache.go:107] acquiring lock: {Name:mkef31b01274a4e2bb8954215534e72f1130ec18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:23:04.268856   12643 cache.go:107] acquiring lock: {Name:mk8adacce4970b592c0c0fdb2300098a31ed42b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:23:04.268900   12643 cache.go:107] acquiring lock: {Name:mkcdb2a1308b3c63135cd6cfb83008a2df6c126c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:23:04.268862   12643 cache.go:107] acquiring lock: {Name:mk6d66d93da35e7da4681990cef206dbf290501e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:23:04.268876   12643 cache.go:107] acquiring lock: {Name:mk14a091085e1f0f1f36e4021244819afb46bb0a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:23:04.269038   12643 start.go:360] acquireMachinesLock for test-preload-706000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:23:04.268862   12643 cache.go:107] acquiring lock: {Name:mk0afa53d8c0c205a143aabc252481579c2bbb2c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:23:04.269154   12643 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1007 05:23:04.269306   12643 start.go:364] duration metric: took 257.292µs to acquireMachinesLock for "test-preload-706000"
	I1007 05:23:04.269325   12643 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1007 05:23:04.269390   12643 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:23:04.269418   12643 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 05:23:04.269495   12643 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1007 05:23:04.269535   12643 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1007 05:23:04.269556   12643 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1007 05:23:04.269345   12643 start.go:93] Provisioning new machine with config: &{Name:test-preload-706000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.24.4 ClusterName:test-preload-706000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:23:04.269589   12643 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:23:04.269589   12643 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:23:04.276353   12643 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:23:04.280632   12643 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1007 05:23:04.280683   12643 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1007 05:23:04.280757   12643 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1007 05:23:04.281268   12643 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1007 05:23:04.283872   12643 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1007 05:23:04.283981   12643 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1007 05:23:04.284002   12643 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:23:04.283999   12643 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:23:04.295109   12643 start.go:159] libmachine.API.Create for "test-preload-706000" (driver="qemu2")
	I1007 05:23:04.295131   12643 client.go:168] LocalClient.Create starting
	I1007 05:23:04.295215   12643 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:23:04.295253   12643 main.go:141] libmachine: Decoding PEM data...
	I1007 05:23:04.295271   12643 main.go:141] libmachine: Parsing certificate...
	I1007 05:23:04.295313   12643 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:23:04.295348   12643 main.go:141] libmachine: Decoding PEM data...
	I1007 05:23:04.295355   12643 main.go:141] libmachine: Parsing certificate...
	I1007 05:23:04.295774   12643 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:23:04.443906   12643 main.go:141] libmachine: Creating SSH key...
	I1007 05:23:04.524418   12643 main.go:141] libmachine: Creating Disk image...
	I1007 05:23:04.524443   12643 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:23:04.524653   12643 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/disk.qcow2
	I1007 05:23:04.535453   12643 main.go:141] libmachine: STDOUT: 
	I1007 05:23:04.535476   12643 main.go:141] libmachine: STDERR: 
	I1007 05:23:04.535535   12643 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/disk.qcow2 +20000M
	I1007 05:23:04.544379   12643 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:23:04.544400   12643 main.go:141] libmachine: STDERR: 
	I1007 05:23:04.544423   12643 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/disk.qcow2
	I1007 05:23:04.544427   12643 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:23:04.544441   12643 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:23:04.544470   12643 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:03:83:20:db:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/disk.qcow2
	I1007 05:23:04.546543   12643 main.go:141] libmachine: STDOUT: 
	I1007 05:23:04.546563   12643 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:23:04.546586   12643 client.go:171] duration metric: took 251.454ms to LocalClient.Create
	I1007 05:23:04.762196   12643 cache.go:162] opening:  /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1007 05:23:04.762821   12643 cache.go:162] opening:  /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1007 05:23:04.792525   12643 cache.go:162] opening:  /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1007 05:23:04.907556   12643 cache.go:162] opening:  /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1007 05:23:04.920025   12643 cache.go:162] opening:  /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1007 05:23:04.987981   12643 cache.go:157] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1007 05:23:04.987995   12643 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 719.235291ms
	I1007 05:23:04.988009   12643 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I1007 05:23:05.028732   12643 cache.go:162] opening:  /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W1007 05:23:05.078952   12643 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1007 05:23:05.078989   12643 cache.go:162] opening:  /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	W1007 05:23:05.580728   12643 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1007 05:23:05.580822   12643 cache.go:162] opening:  /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1007 05:23:06.051822   12643 cache.go:157] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1007 05:23:06.051903   12643 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.783266709s
	I1007 05:23:06.051935   12643 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1007 05:23:06.546798   12643 start.go:128] duration metric: took 2.277199584s to createHost
	I1007 05:23:06.546854   12643 start.go:83] releasing machines lock for "test-preload-706000", held for 2.277575083s
	W1007 05:23:06.546906   12643 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:23:06.560348   12643 out.go:177] * Deleting "test-preload-706000" in qemu2 ...
	W1007 05:23:06.585820   12643 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:23:06.585854   12643 start.go:729] Will try again in 5 seconds ...
	I1007 05:23:06.802109   12643 cache.go:157] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1007 05:23:06.802162   12643 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.533535834s
	I1007 05:23:06.802187   12643 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1007 05:23:06.855167   12643 cache.go:157] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1007 05:23:06.855205   12643 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.586350083s
	I1007 05:23:06.855226   12643 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1007 05:23:08.119604   12643 cache.go:157] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1007 05:23:08.119646   12643 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 3.850917791s
	I1007 05:23:08.119701   12643 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1007 05:23:10.007277   12643 cache.go:157] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1007 05:23:10.007351   12643 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.738783416s
	I1007 05:23:10.007381   12643 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1007 05:23:10.370222   12643 cache.go:157] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1007 05:23:10.370332   12643 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.101569583s
	I1007 05:23:10.370365   12643 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1007 05:23:11.586062   12643 start.go:360] acquireMachinesLock for test-preload-706000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:23:11.586572   12643 start.go:364] duration metric: took 420.208µs to acquireMachinesLock for "test-preload-706000"
	I1007 05:23:11.586709   12643 start.go:93] Provisioning new machine with config: &{Name:test-preload-706000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.24.4 ClusterName:test-preload-706000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:23:11.586914   12643 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:23:11.593609   12643 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:23:11.636170   12643 start.go:159] libmachine.API.Create for "test-preload-706000" (driver="qemu2")
	I1007 05:23:11.636297   12643 client.go:168] LocalClient.Create starting
	I1007 05:23:11.636433   12643 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:23:11.636509   12643 main.go:141] libmachine: Decoding PEM data...
	I1007 05:23:11.636530   12643 main.go:141] libmachine: Parsing certificate...
	I1007 05:23:11.636587   12643 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:23:11.636643   12643 main.go:141] libmachine: Decoding PEM data...
	I1007 05:23:11.636657   12643 main.go:141] libmachine: Parsing certificate...
	I1007 05:23:11.637195   12643 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:23:11.787656   12643 main.go:141] libmachine: Creating SSH key...
	I1007 05:23:11.845278   12643 main.go:141] libmachine: Creating Disk image...
	I1007 05:23:11.845292   12643 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:23:11.845479   12643 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/disk.qcow2
	I1007 05:23:11.855440   12643 main.go:141] libmachine: STDOUT: 
	I1007 05:23:11.855504   12643 main.go:141] libmachine: STDERR: 
	I1007 05:23:11.855571   12643 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/disk.qcow2 +20000M
	I1007 05:23:11.864283   12643 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:23:11.864299   12643 main.go:141] libmachine: STDERR: 
	I1007 05:23:11.864313   12643 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/disk.qcow2
	I1007 05:23:11.864318   12643 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:23:11.864329   12643 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:23:11.864366   12643 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:6b:fc:79:2c:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/test-preload-706000/disk.qcow2
	I1007 05:23:11.866237   12643 main.go:141] libmachine: STDOUT: 
	I1007 05:23:11.866251   12643 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:23:11.866263   12643 client.go:171] duration metric: took 229.963625ms to LocalClient.Create
	I1007 05:23:13.361187   12643 cache.go:157] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I1007 05:23:13.361262   12643 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.092593917s
	I1007 05:23:13.361288   12643 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I1007 05:23:13.361333   12643 cache.go:87] Successfully saved all images to host disk.
	I1007 05:23:13.868407   12643 start.go:128] duration metric: took 2.281509708s to createHost
	I1007 05:23:13.868452   12643 start.go:83] releasing machines lock for "test-preload-706000", held for 2.28189825s
	W1007 05:23:13.868706   12643 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-706000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-706000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:23:13.882335   12643 out.go:201] 
	W1007 05:23:13.887413   12643 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:23:13.887439   12643 out.go:270] * 
	* 
	W1007 05:23:13.889894   12643 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:23:13.899305   12643 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-706000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-10-07 05:23:13.915105 -0700 PDT m=+671.200540460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-706000 -n test-preload-706000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-706000 -n test-preload-706000: exit status 7 (72.960042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-706000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-706000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-706000
--- FAIL: TestPreload (9.92s)
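
Every qemu2 VM creation in this run fails at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never launched. A minimal triage sketch for the build agent, assuming socket_vmnet is installed under /opt/socket_vmnet as the client path in the log suggests (the gateway address is an illustrative value from the socket_vmnet README, not taken from this run):

	# Is the daemon alive, and does the socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# If not, start it; the daemon must run as root to open vmnet.
	sudo /opt/socket_vmnet/bin/socket_vmnet \
	  --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet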

TestScheduledStopUnix (9.88s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-119000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-119000 --memory=2048 --driver=qemu2 : exit status 80 (9.727405375s)

-- stdout --
	* [scheduled-stop-119000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-119000" primary control-plane node in "scheduled-stop-119000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-119000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-119000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-119000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-119000" primary control-plane node in "scheduled-stop-119000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-119000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-119000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-10-07 05:23:23.796153 -0700 PDT m=+681.081771210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-119000 -n scheduled-stop-119000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-119000 -n scheduled-stop-119000: exit status 7 (74.37425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-119000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-119000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-119000
--- FAIL: TestScheduledStopUnix (9.88s)
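
The ~9.9s wall time of each failed start is the retry loop, not a hang: the first create attempt fails immediately in socket_vmnet_client, start.go waits five seconds ("Will try again in 5 seconds ..."), and the second attempt fails identically. The connection step can be probed in isolation, without booting QEMU, by handing the client a no-op command (a sketch assuming the client's documented "socket_vmnet_client SOCKET COMMAND..." calling convention):

	# Fails with "Connection refused" while the daemon is down;
	# execs /usr/bin/true through the client once the socket is reachable.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true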

TestSkaffold (16.22s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe924193872 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe924193872 version: (1.053382333s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-640000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-640000 --memory=2600 --driver=qemu2 : exit status 80 (9.742815833s)

-- stdout --
	* [skaffold-640000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-640000" primary control-plane node in "skaffold-640000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-640000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-640000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-640000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-640000" primary control-plane node in "skaffold-640000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-640000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-640000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-10-07 05:23:40.019638 -0700 PDT m=+697.305556085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-640000 -n skaffold-640000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-640000 -n skaffold-640000: exit status 7 (68.665583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-640000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-640000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-640000
--- FAIL: TestSkaffold (16.22s)
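
Once the socket is reachable again, a single failing test can be re-run on its own instead of repeating the suite; the integration tests are plain Go tests, so -run selects by name. A sketch from the minikube repository root (depending on the tree, the package may be guarded by the integration build tag, and extra flags such as -minikube-start-args are accepted but omitted here):

	go test -tags integration ./test/integration -v -timeout 30m -run 'TestSkaffold'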

TestRunningBinaryUpgrade (622.1s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1382625489 start -p running-upgrade-494000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1382625489 start -p running-upgrade-494000 --memory=2200 --vm-driver=qemu2 : (1m13.715652291s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-494000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-494000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m31.149441375s)

-- stdout --
	* [running-upgrade-494000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-494000" primary control-plane node in "running-upgrade-494000" cluster
	* Updating the running qemu2 "running-upgrade-494000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1007 05:25:39.491581   13060 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:25:39.491719   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:25:39.491723   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:25:39.491725   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:25:39.491855   13060 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:25:39.492894   13060 out.go:352] Setting JSON to false
	I1007 05:25:39.511211   13060 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6910,"bootTime":1728297029,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:25:39.511304   13060 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:25:39.516572   13060 out.go:177] * [running-upgrade-494000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:25:39.524638   13060 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:25:39.524683   13060 notify.go:220] Checking for updates...
	I1007 05:25:39.532542   13060 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:25:39.536601   13060 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:25:39.539575   13060 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:25:39.542596   13060 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:25:39.545576   13060 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:25:39.548898   13060 config.go:182] Loaded profile config "running-upgrade-494000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:25:39.552489   13060 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1007 05:25:39.555543   13060 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:25:39.559470   13060 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:25:39.566572   13060 start.go:297] selected driver: qemu2
	I1007 05:25:39.566579   13060 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-494000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52242 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgra
de-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 05:25:39.566631   13060 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:25:39.568999   13060 cni.go:84] Creating CNI manager for ""
	I1007 05:25:39.569033   13060 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:25:39.569057   13060 start.go:340] cluster config:
	{Name:running-upgrade-494000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52242 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 05:25:39.569109   13060 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:25:39.577638   13060 out.go:177] * Starting "running-upgrade-494000" primary control-plane node in "running-upgrade-494000" cluster
	I1007 05:25:39.580553   13060 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1007 05:25:39.580573   13060 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1007 05:25:39.580582   13060 cache.go:56] Caching tarball of preloaded images
	I1007 05:25:39.580653   13060 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:25:39.580659   13060 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1007 05:25:39.580719   13060 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/config.json ...
	I1007 05:25:39.581133   13060 start.go:360] acquireMachinesLock for running-upgrade-494000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:25:39.581162   13060 start.go:364] duration metric: took 23.333µs to acquireMachinesLock for "running-upgrade-494000"
	I1007 05:25:39.581171   13060 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:25:39.581175   13060 fix.go:54] fixHost starting: 
	I1007 05:25:39.581772   13060 fix.go:112] recreateIfNeeded on running-upgrade-494000: state=Running err=<nil>
	W1007 05:25:39.581782   13060 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:25:39.586553   13060 out.go:177] * Updating the running qemu2 "running-upgrade-494000" VM ...
	I1007 05:25:39.594576   13060 machine.go:93] provisionDockerMachine start ...
	I1007 05:25:39.594625   13060 main.go:141] libmachine: Using SSH client type: native
	I1007 05:25:39.594734   13060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049721f0] 0x104974a30 <nil>  [] 0s} localhost 52210 <nil> <nil>}
	I1007 05:25:39.594739   13060 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 05:25:39.645055   13060 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-494000
	
	I1007 05:25:39.645069   13060 buildroot.go:166] provisioning hostname "running-upgrade-494000"
	I1007 05:25:39.645136   13060 main.go:141] libmachine: Using SSH client type: native
	I1007 05:25:39.645248   13060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049721f0] 0x104974a30 <nil>  [] 0s} localhost 52210 <nil> <nil>}
	I1007 05:25:39.645255   13060 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-494000 && echo "running-upgrade-494000" | sudo tee /etc/hostname
	I1007 05:25:39.698374   13060 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-494000
	
	I1007 05:25:39.698448   13060 main.go:141] libmachine: Using SSH client type: native
	I1007 05:25:39.698562   13060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049721f0] 0x104974a30 <nil>  [] 0s} localhost 52210 <nil> <nil>}
	I1007 05:25:39.698572   13060 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-494000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-494000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-494000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 05:25:39.750071   13060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 05:25:39.750084   13060 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18424-10771/.minikube CaCertPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18424-10771/.minikube}
	I1007 05:25:39.750093   13060 buildroot.go:174] setting up certificates
	I1007 05:25:39.750106   13060 provision.go:84] configureAuth start
	I1007 05:25:39.750111   13060 provision.go:143] copyHostCerts
	I1007 05:25:39.750188   13060 exec_runner.go:144] found /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.pem, removing ...
	I1007 05:25:39.750195   13060 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.pem
	I1007 05:25:39.750311   13060 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.pem (1082 bytes)
	I1007 05:25:39.750499   13060 exec_runner.go:144] found /Users/jenkins/minikube-integration/18424-10771/.minikube/cert.pem, removing ...
	I1007 05:25:39.750502   13060 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18424-10771/.minikube/cert.pem
	I1007 05:25:39.750546   13060 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18424-10771/.minikube/cert.pem (1123 bytes)
	I1007 05:25:39.750653   13060 exec_runner.go:144] found /Users/jenkins/minikube-integration/18424-10771/.minikube/key.pem, removing ...
	I1007 05:25:39.750657   13060 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18424-10771/.minikube/key.pem
	I1007 05:25:39.750694   13060 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18424-10771/.minikube/key.pem (1675 bytes)
	I1007 05:25:39.750782   13060 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-494000 san=[127.0.0.1 localhost minikube running-upgrade-494000]
	I1007 05:25:39.837039   13060 provision.go:177] copyRemoteCerts
	I1007 05:25:39.837086   13060 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 05:25:39.837096   13060 sshutil.go:53] new ssh client: &{IP:localhost Port:52210 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/running-upgrade-494000/id_rsa Username:docker}
	I1007 05:25:39.864269   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 05:25:39.872889   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1007 05:25:39.879702   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 05:25:39.888240   13060 provision.go:87] duration metric: took 138.123125ms to configureAuth
	I1007 05:25:39.888250   13060 buildroot.go:189] setting minikube options for container-runtime
	I1007 05:25:39.888362   13060 config.go:182] Loaded profile config "running-upgrade-494000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:25:39.888404   13060 main.go:141] libmachine: Using SSH client type: native
	I1007 05:25:39.888489   13060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049721f0] 0x104974a30 <nil>  [] 0s} localhost 52210 <nil> <nil>}
	I1007 05:25:39.888494   13060 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1007 05:25:39.938562   13060 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1007 05:25:39.938571   13060 buildroot.go:70] root file system type: tmpfs
	I1007 05:25:39.938624   13060 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1007 05:25:39.938688   13060 main.go:141] libmachine: Using SSH client type: native
	I1007 05:25:39.938790   13060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049721f0] 0x104974a30 <nil>  [] 0s} localhost 52210 <nil> <nil>}
	I1007 05:25:39.938823   13060 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1007 05:25:39.990429   13060 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1007 05:25:39.990487   13060 main.go:141] libmachine: Using SSH client type: native
	I1007 05:25:39.990590   13060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049721f0] 0x104974a30 <nil>  [] 0s} localhost 52210 <nil> <nil>}
	I1007 05:25:39.990597   13060 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1007 05:25:40.041412   13060 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 05:25:40.041427   13060 machine.go:96] duration metric: took 446.852875ms to provisionDockerMachine
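
[Editor's note] The SSH one-liner above only swaps in docker.service.new and restarts Docker when diff reports a change. A minimal Go sketch of that compare-then-swap pattern follows; the paths, unit name, and sudo/systemctl invocation are illustrative assumptions, not minikube's actual provisioner API.

// Sketch of "update a systemd unit only if its content changed".
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit writes newContent to path only when it differs from the current
// file, then reloads systemd and restarts the unit (all names illustrative).
func updateUnit(path string, newContent []byte, unit string) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return nil // unchanged: skip the disruptive restart
	}
	if err := os.WriteFile(path+".new", newContent, 0644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", unit},
		{"systemctl", "restart", unit},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	// Hypothetical target; on a real host this would be
	// /lib/systemd/system/docker.service and requires systemd + sudo.
	unitText := []byte("[Unit]\nDescription=example\n")
	if err := updateUnit("/tmp/docker.service", unitText, "docker"); err != nil {
		fmt.Println("update failed:", err)
	}
}
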
	I1007 05:25:40.041433   13060 start.go:293] postStartSetup for "running-upgrade-494000" (driver="qemu2")
	I1007 05:25:40.041440   13060 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 05:25:40.041504   13060 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 05:25:40.041513   13060 sshutil.go:53] new ssh client: &{IP:localhost Port:52210 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/running-upgrade-494000/id_rsa Username:docker}
	I1007 05:25:40.067574   13060 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 05:25:40.069253   13060 info.go:137] Remote host: Buildroot 2021.02.12
	I1007 05:25:40.069261   13060 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18424-10771/.minikube/addons for local assets ...
	I1007 05:25:40.069327   13060 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18424-10771/.minikube/files for local assets ...
	I1007 05:25:40.069427   13060 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18424-10771/.minikube/files/etc/ssl/certs/112842.pem -> 112842.pem in /etc/ssl/certs
	I1007 05:25:40.069538   13060 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 05:25:40.073179   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/files/etc/ssl/certs/112842.pem --> /etc/ssl/certs/112842.pem (1708 bytes)
	I1007 05:25:40.079905   13060 start.go:296] duration metric: took 38.466125ms for postStartSetup
	I1007 05:25:40.079919   13060 fix.go:56] duration metric: took 498.754084ms for fixHost
	I1007 05:25:40.079967   13060 main.go:141] libmachine: Using SSH client type: native
	I1007 05:25:40.080081   13060 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1049721f0] 0x104974a30 <nil>  [] 0s} localhost 52210 <nil> <nil>}
	I1007 05:25:40.080088   13060 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 05:25:40.130363   13060 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728303939.783424680
	
	I1007 05:25:40.130374   13060 fix.go:216] guest clock: 1728303939.783424680
	I1007 05:25:40.130382   13060 fix.go:229] Guest: 2024-10-07 05:25:39.78342468 -0700 PDT Remote: 2024-10-07 05:25:40.079922 -0700 PDT m=+0.610454668 (delta=-296.49732ms)
	I1007 05:25:40.130394   13060 fix.go:200] guest clock delta is within tolerance: -296.49732ms
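
[Editor's note] A minimal Go sketch of the guest-clock check logged above, assuming the guest's `date +%s.%N` output has already been captured as a string; the 1-second tolerance is an assumed value for illustration.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// guestClockDelta parses a "seconds.nanoseconds" timestamp from the guest and
// returns guest-minus-host drift. float64 parsing rounds away sub-microsecond
// precision, which is fine for a tolerance check at this scale.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Sample value taken from the log line above.
	delta, err := guestClockDelta("1728303939.783424680", time.Now())
	if err != nil {
		panic(err)
	}
	if delta < time.Second && delta > -time.Second {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
	}
}
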
	I1007 05:25:40.130396   13060 start.go:83] releasing machines lock for "running-upgrade-494000", held for 549.240542ms
	I1007 05:25:40.130474   13060 ssh_runner.go:195] Run: cat /version.json
	I1007 05:25:40.130483   13060 sshutil.go:53] new ssh client: &{IP:localhost Port:52210 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/running-upgrade-494000/id_rsa Username:docker}
	I1007 05:25:40.130476   13060 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 05:25:40.130505   13060 sshutil.go:53] new ssh client: &{IP:localhost Port:52210 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/running-upgrade-494000/id_rsa Username:docker}
	W1007 05:25:40.156011   13060 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1007 05:25:40.156075   13060 ssh_runner.go:195] Run: systemctl --version
	I1007 05:25:40.158041   13060 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 05:25:40.160351   13060 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 05:25:40.160400   13060 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1007 05:25:40.168418   13060 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1007 05:25:40.173306   13060 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 05:25:40.173314   13060 start.go:495] detecting cgroup driver to use...
	I1007 05:25:40.173416   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 05:25:40.178813   13060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1007 05:25:40.181730   13060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1007 05:25:40.184573   13060 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1007 05:25:40.184608   13060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1007 05:25:40.189117   13060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 05:25:40.192131   13060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1007 05:25:40.195189   13060 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 05:25:40.198582   13060 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 05:25:40.201422   13060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1007 05:25:40.204420   13060 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1007 05:25:40.207363   13060 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1007 05:25:40.210123   13060 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 05:25:40.212744   13060 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 05:25:40.215964   13060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:25:40.307105   13060 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1007 05:25:40.314848   13060 start.go:495] detecting cgroup driver to use...
	I1007 05:25:40.314935   13060 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1007 05:25:40.322790   13060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 05:25:40.327694   13060 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 05:25:40.338324   13060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 05:25:40.383614   13060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1007 05:25:40.388216   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 05:25:40.393459   13060 ssh_runner.go:195] Run: which cri-dockerd
	I1007 05:25:40.394745   13060 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1007 05:25:40.397452   13060 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1007 05:25:40.402793   13060 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1007 05:25:40.501006   13060 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1007 05:25:40.595734   13060 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1007 05:25:40.595787   13060 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1007 05:25:40.601573   13060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:25:40.695450   13060 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1007 05:25:43.648551   13060 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.9531315s)
	I1007 05:25:43.648630   13060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1007 05:25:43.653587   13060 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1007 05:25:43.660759   13060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1007 05:25:43.666152   13060 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1007 05:25:43.766190   13060 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1007 05:25:43.849934   13060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:25:43.935731   13060 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1007 05:25:43.942029   13060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1007 05:25:43.946798   13060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:25:44.030432   13060 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1007 05:25:44.070665   13060 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1007 05:25:44.070753   13060 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1007 05:25:44.072765   13060 start.go:563] Will wait 60s for crictl version
	I1007 05:25:44.072811   13060 ssh_runner.go:195] Run: which crictl
	I1007 05:25:44.074344   13060 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 05:25:44.085635   13060 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1007 05:25:44.085713   13060 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1007 05:25:44.098734   13060 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1007 05:25:44.118986   13060 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1007 05:25:44.119129   13060 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1007 05:25:44.120438   13060 kubeadm.go:883] updating cluster {Name:running-upgrade-494000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52242 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1007 05:25:44.120481   13060 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1007 05:25:44.120528   13060 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1007 05:25:44.131061   13060 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1007 05:25:44.131073   13060 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1007 05:25:44.131133   13060 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1007 05:25:44.134099   13060 ssh_runner.go:195] Run: which lz4
	I1007 05:25:44.135325   13060 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 05:25:44.136602   13060 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 05:25:44.136612   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1007 05:25:45.127303   13060 docker.go:649] duration metric: took 992.04425ms to copy over tarball
	I1007 05:25:45.127375   13060 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 05:25:46.244789   13060 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.11742s)
	I1007 05:25:46.244802   13060 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1007 05:25:46.260696   13060 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1007 05:25:46.264045   13060 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1007 05:25:46.269233   13060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:25:46.345460   13060 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1007 05:25:47.518824   13060 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.173369958s)
	I1007 05:25:47.518916   13060 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1007 05:25:47.529975   13060 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1007 05:25:47.529996   13060 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1007 05:25:47.530000   13060 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1007 05:25:47.535302   13060 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:25:47.537644   13060 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:25:47.539871   13060 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:25:47.539901   13060 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:25:47.542034   13060 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:25:47.542309   13060 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:25:47.543548   13060 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:25:47.543884   13060 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:25:47.544939   13060 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:25:47.544948   13060 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1007 05:25:47.545849   13060 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:25:47.546503   13060 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1007 05:25:47.547230   13060 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:25:47.547453   13060 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1007 05:25:47.548526   13060 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1007 05:25:47.549133   13060 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:25:48.031000   13060 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:25:48.042195   13060 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1007 05:25:48.042222   13060 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:25:48.042280   13060 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:25:48.054978   13060 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1007 05:25:48.078679   13060 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:25:48.089833   13060 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1007 05:25:48.089866   13060 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:25:48.089929   13060 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:25:48.102845   13060 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:25:48.103099   13060 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1007 05:25:48.114737   13060 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1007 05:25:48.114759   13060 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:25:48.114819   13060 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:25:48.128392   13060 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1007 05:25:48.143095   13060 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:25:48.154448   13060 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1007 05:25:48.154472   13060 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:25:48.154532   13060 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:25:48.164979   13060 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1007 05:25:48.246246   13060 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1007 05:25:48.259678   13060 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1007 05:25:48.259705   13060 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1007 05:25:48.259772   13060 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1007 05:25:48.264516   13060 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1007 05:25:48.273844   13060 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1007 05:25:48.273987   13060 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1007 05:25:48.277618   13060 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1007 05:25:48.277641   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1007 05:25:48.277854   13060 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1007 05:25:48.277920   13060 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1007 05:25:48.278054   13060 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1007 05:25:48.297060   13060 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1007 05:25:48.297221   13060 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1007 05:25:48.308129   13060 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1007 05:25:48.308155   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1007 05:25:48.329427   13060 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1007 05:25:48.329443   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	W1007 05:25:48.364784   13060 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1007 05:25:48.364941   13060 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:25:48.390659   13060 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W1007 05:25:48.390979   13060 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1007 05:25:48.391100   13060 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:25:48.397917   13060 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1007 05:25:48.397939   13060 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:25:48.398009   13060 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:25:48.438890   13060 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1007 05:25:48.438918   13060 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:25:48.438977   13060 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:25:48.467297   13060 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1007 05:25:48.467461   13060 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1007 05:25:48.490832   13060 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1007 05:25:48.490873   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1007 05:25:48.491055   13060 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1007 05:25:48.491189   13060 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1007 05:25:48.504504   13060 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1007 05:25:48.504533   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1007 05:25:48.606847   13060 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1007 05:25:48.606863   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1007 05:25:48.911995   13060 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1007 05:25:48.912019   13060 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1007 05:25:48.912033   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1007 05:25:48.987943   13060 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1007 05:25:48.987963   13060 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1007 05:25:48.987973   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1007 05:25:49.200180   13060 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1007 05:25:49.200220   13060 cache_images.go:92] duration metric: took 1.670243458s to LoadCachedImages
	W1007 05:25:49.200262   13060 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
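
[Editor's note] The image loads above run `sudo cat <tarball> | docker load` over SSH. The same step can be expressed natively in Go by streaming the tarball into docker load's stdin; a minimal local sketch follows (the tarball path is an assumption taken from the log, not a guaranteed location).

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// dockerLoad streams an image tarball into `docker load` instead of shelling
// out to cat, avoiding an extra process and a shell quoting layer.
func dockerLoad(tarball string) error {
	f, err := os.Open(tarball)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f // docker load reads the tarball from stdin
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	if err := dockerLoad("/var/lib/minikube/images/etcd_3.5.3-0"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
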
	I1007 05:25:49.200267   13060 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1007 05:25:49.200321   13060 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-494000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 05:25:49.200396   13060 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1007 05:25:49.232866   13060 cni.go:84] Creating CNI manager for ""
	I1007 05:25:49.232887   13060 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:25:49.232913   13060 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 05:25:49.232922   13060 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-494000 NodeName:running-upgrade-494000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 05:25:49.232995   13060 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-494000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
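
[Editor's note] Config blocks like the kubeadm config above are typically rendered from Go text/template; the template text and struct fields below are an illustrative sketch under that assumption, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// clusterTmpl renders a fragment of a kubeadm ClusterConfiguration; field
// names on the data struct are assumptions for this example.
const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.Endpoint}}
kubernetesVersion: {{.Version}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("cc").Parse(clusterTmpl))
	// Values taken from the config dump above.
	err := t.Execute(os.Stdout, struct {
		Endpoint, Version, PodSubnet, ServiceSubnet string
	}{
		Endpoint:      "control-plane.minikube.internal:8443",
		Version:       "v1.24.1",
		PodSubnet:     "10.244.0.0/16",
		ServiceSubnet: "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}
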
	
	I1007 05:25:49.233061   13060 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1007 05:25:49.241919   13060 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 05:25:49.241992   13060 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 05:25:49.245598   13060 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1007 05:25:49.250668   13060 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 05:25:49.255799   13060 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1007 05:25:49.260928   13060 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1007 05:25:49.262398   13060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:25:49.350143   13060 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 05:25:49.355758   13060 certs.go:68] Setting up /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000 for IP: 10.0.2.15
	I1007 05:25:49.355764   13060 certs.go:194] generating shared ca certs ...
	I1007 05:25:49.355772   13060 certs.go:226] acquiring lock for ca certs: {Name:mkc7f2d51afe66903c603984849255f5d4b47504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:25:49.356052   13060 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.key
	I1007 05:25:49.356112   13060 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/proxy-client-ca.key
	I1007 05:25:49.356119   13060 certs.go:256] generating profile certs ...
	I1007 05:25:49.356203   13060 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/client.key
	I1007 05:25:49.356214   13060 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/apiserver.key.17f2aa8a
	I1007 05:25:49.356223   13060 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/apiserver.crt.17f2aa8a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1007 05:25:49.470440   13060 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/apiserver.crt.17f2aa8a ...
	I1007 05:25:49.470448   13060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/apiserver.crt.17f2aa8a: {Name:mk7ee59c1c8f33e65fcc4404237a328d45364b78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:25:49.470741   13060 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/apiserver.key.17f2aa8a ...
	I1007 05:25:49.470747   13060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/apiserver.key.17f2aa8a: {Name:mkb5692778197798184577967743eac9723cca07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:25:49.470908   13060 certs.go:381] copying /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/apiserver.crt.17f2aa8a -> /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/apiserver.crt
	I1007 05:25:49.471046   13060 certs.go:385] copying /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/apiserver.key.17f2aa8a -> /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/apiserver.key
	I1007 05:25:49.471204   13060 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/proxy-client.key
	I1007 05:25:49.471359   13060 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/11284.pem (1338 bytes)
	W1007 05:25:49.471396   13060 certs.go:480] ignoring /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/11284_empty.pem, impossibly tiny 0 bytes
	I1007 05:25:49.471404   13060 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 05:25:49.471436   13060 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem (1082 bytes)
	I1007 05:25:49.471469   13060 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem (1123 bytes)
	I1007 05:25:49.471499   13060 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/key.pem (1675 bytes)
	I1007 05:25:49.471561   13060 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/files/etc/ssl/certs/112842.pem (1708 bytes)
	I1007 05:25:49.472032   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 05:25:49.479898   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 05:25:49.487501   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 05:25:49.495892   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 05:25:49.505613   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1007 05:25:49.511932   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 05:25:49.519607   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 05:25:49.530640   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 05:25:49.537006   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 05:25:49.543548   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/11284.pem --> /usr/share/ca-certificates/11284.pem (1338 bytes)
	I1007 05:25:49.554803   13060 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/files/etc/ssl/certs/112842.pem --> /usr/share/ca-certificates/112842.pem (1708 bytes)
	I1007 05:25:49.560985   13060 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 05:25:49.565955   13060 ssh_runner.go:195] Run: openssl version
	I1007 05:25:49.569804   13060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11284.pem && ln -fs /usr/share/ca-certificates/11284.pem /etc/ssl/certs/11284.pem"
	I1007 05:25:49.580472   13060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11284.pem
	I1007 05:25:49.582073   13060 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:13 /usr/share/ca-certificates/11284.pem
	I1007 05:25:49.582098   13060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11284.pem
	I1007 05:25:49.583909   13060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11284.pem /etc/ssl/certs/51391683.0"
	I1007 05:25:49.586621   13060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112842.pem && ln -fs /usr/share/ca-certificates/112842.pem /etc/ssl/certs/112842.pem"
	I1007 05:25:49.589482   13060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112842.pem
	I1007 05:25:49.591003   13060 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:13 /usr/share/ca-certificates/112842.pem
	I1007 05:25:49.591029   13060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112842.pem
	I1007 05:25:49.592885   13060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112842.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 05:25:49.595652   13060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 05:25:49.598579   13060 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 05:25:49.600006   13060 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I1007 05:25:49.600035   13060 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 05:25:49.601849   13060 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
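The hash-named symlinks above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's hashed-directory convention: each CA under /etc/ssl/certs is also reachable as <subject-hash>.0, where the hash is what "openssl x509 -hash -noout" prints. A minimal Go sketch of that install step (the installCA helper and hard-coded paths are illustrative, not minikube's actual code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA links a PEM certificate into an OpenSSL-style hashed
    // directory so TLS clients can find it by subject hash, mirroring
    // the `openssl x509 -hash` + `ln -fs` steps in the log above.
    func installCA(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // ln -fs semantics: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }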
	I1007 05:25:49.604721   13060 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 05:25:49.606262   13060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 05:25:49.608121   13060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 05:25:49.610028   13060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 05:25:49.611877   13060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 05:25:49.613896   13060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 05:25:49.615690   13060 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
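Each "openssl x509 ... -checkend 86400" above exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is how this step decides whether control-plane certs need regenerating. The same check in pure Go, as a sketch (the expiresWithin helper is hypothetical; the cert path is taken from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, matching `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }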
	I1007 05:25:49.617509   13060 kubeadm.go:392] StartCluster: {Name:running-upgrade-494000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52242 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-494000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 05:25:49.617578   13060 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1007 05:25:49.631427   13060 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 05:25:49.634641   13060 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 05:25:49.634652   13060 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 05:25:49.634681   13060 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 05:25:49.637397   13060 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 05:25:49.637429   13060 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-494000" does not appear in /Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:25:49.637447   13060 kubeconfig.go:62] /Users/jenkins/minikube-integration/18424-10771/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-494000" cluster setting kubeconfig missing "running-upgrade-494000" context setting]
	I1007 05:25:49.637645   13060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/kubeconfig: {Name:mkfa460adb077498749c83f32a682247504db19f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:25:49.638726   13060 kapi.go:59] client config for running-upgrade-494000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/client.key", CAFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1063c7ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 05:25:49.639704   13060 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 05:25:49.642389   13060 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-494000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
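This drift is what forces the reconfigure: newer cri-dockerd expects the CRI socket as a unix:// URI, and the kubelet config switches from the systemd to the cgroupfs cgroup driver. Detection itself is just "diff -u" against the freshly rendered kubeadm.yaml, with exit status 1 meaning the files differ. A hedged Go sketch of that decision (not minikube's actual implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // configDrifted runs `diff -u old new`; diff exits 1 when the files
    // differ, 0 when identical, and >1 on error, so an *exec.ExitError
    // with code 1 means "drift detected, reconfigure".
    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil // identical
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil // differ: the diff text shows what changed
        }
        return false, "", err // diff itself failed
    }

    func main() {
        drifted, patch, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        if drifted {
            fmt.Println("kubeadm config drift detected:\n" + patch)
        }
    }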
	I1007 05:25:49.642396   13060 kubeadm.go:1160] stopping kube-system containers ...
	I1007 05:25:49.642441   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1007 05:25:49.659059   13060 docker.go:483] Stopping containers: [349aeab911b4 ce16ff8feae8 e7e87788af86 dd78a53f996f 6d50f7dd28e6 978dd376215f 03dfc3c662d8 89902f34c603 cff18c2b3cd6 f72b81e9f3f3 5460b0f8a17b 8269cd6065ba cacd690a19c2 d4a40a7637f2 23750bce97ae 776f557fa645 ba1d61b61eb0]
	I1007 05:25:49.659132   13060 ssh_runner.go:195] Run: docker stop 349aeab911b4 ce16ff8feae8 e7e87788af86 dd78a53f996f 6d50f7dd28e6 978dd376215f 03dfc3c662d8 89902f34c603 cff18c2b3cd6 f72b81e9f3f3 5460b0f8a17b 8269cd6065ba cacd690a19c2 d4a40a7637f2 23750bce97ae 776f557fa645 ba1d61b61eb0
	I1007 05:25:49.851493   13060 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 05:25:49.924515   13060 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 05:25:49.928687   13060 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Oct  7 12:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Oct  7 12:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct  7 12:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct  7 12:25 /etc/kubernetes/scheduler.conf
	
	I1007 05:25:49.928724   13060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/admin.conf
	I1007 05:25:49.932077   13060 kubeadm.go:163] "https://control-plane.minikube.internal:52242" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1007 05:25:49.932109   13060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 05:25:49.935324   13060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/kubelet.conf
	I1007 05:25:49.938294   13060 kubeadm.go:163] "https://control-plane.minikube.internal:52242" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1007 05:25:49.938335   13060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 05:25:49.941156   13060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/controller-manager.conf
	I1007 05:25:49.944111   13060 kubeadm.go:163] "https://control-plane.minikube.internal:52242" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1007 05:25:49.944145   13060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 05:25:49.947302   13060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/scheduler.conf
	I1007 05:25:49.949928   13060 kubeadm.go:163] "https://control-plane.minikube.internal:52242" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1007 05:25:49.949956   13060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
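The four grep/rm pairs above all apply the same guard: grep exits with status 1 when the expected endpoint string is absent from a kubeconfig under /etc/kubernetes, and the stale file is then deleted so a later "kubeadm init phase kubeconfig all" can regenerate it. Sketched in Go (endpoint and paths taken from the log; pruneStaleKubeconfigs is an illustrative name):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // pruneStaleKubeconfigs removes any kubeconfig that does not reference
    // the expected API endpoint, so `kubeadm init phase kubeconfig all`
    // will write fresh ones.
    func pruneStaleKubeconfigs(endpoint string, paths []string) error {
        for _, p := range paths {
            data, err := os.ReadFile(p)
            if err != nil {
                return err
            }
            if !bytes.Contains(data, []byte(endpoint)) {
                fmt.Printf("%s: missing %q, removing\n", p, endpoint)
                if err := os.Remove(p); err != nil {
                    return err
                }
            }
        }
        return nil
    }

    func main() {
        err := pruneStaleKubeconfigs("https://control-plane.minikube.internal:52242", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }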
	I1007 05:25:49.952636   13060 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 05:25:49.955948   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:25:49.985671   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:25:50.728368   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:25:50.924193   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:25:50.955628   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
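Rather than a full "kubeadm init", the restart path replays only the individual phases it needs (certs, kubeconfig, kubelet-start, control-plane, etcd), each against the same /var/tmp/minikube/kubeadm.yaml. A compact sketch of driving those phases, assuming the versioned kubeadm path shown in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runKubeadmPhases replays the individual init phases used for a
    // cluster restart; each phase is re-runnable against the same config.
    func runKubeadmPhases(kubeadm, config string) error {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, ph := range phases {
            args := append(ph, "--config", config)
            if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
                return fmt.Errorf("kubeadm %v: %w\n%s", ph, err, out)
            }
        }
        return nil
    }

    func main() {
        err := runKubeadmPhases("/var/lib/minikube/binaries/v1.24.1/kubeadm", "/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            fmt.Println(err)
        }
    }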
	I1007 05:25:50.982825   13060 api_server.go:52] waiting for apiserver process to appear ...
	I1007 05:25:50.982916   13060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:25:51.485327   13060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:25:51.985284   13060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:25:52.484187   13060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:25:52.488394   13060 api_server.go:72] duration metric: took 1.505599416s to wait for apiserver process to appear ...
	I1007 05:25:52.488420   13060 api_server.go:88] waiting for apiserver healthz status ...
	I1007 05:25:52.488441   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:25:57.490516   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:25:57.490557   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:26:02.490838   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:26:02.490929   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:26:07.491863   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:26:07.491931   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:26:12.492637   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:26:12.492689   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:26:17.493667   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:26:17.493709   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:26:22.495033   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:26:22.495141   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:26:27.496665   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:26:27.496773   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:26:32.499410   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:26:32.499505   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:26:37.502209   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:26:37.502308   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:26:42.504948   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:26:42.505003   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:26:47.507424   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:26:47.507519   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:26:52.509731   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
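Each "Checking apiserver healthz" / "stopped:" pair above is one iteration of a short-timeout GET against https://10.0.2.15:8443/healthz; the roughly five-second spacing of the timestamps matches the client timeout named in the error text. Because the QEMU guest never answers, every attempt times out and the loop falls through to log collection below. The shape of one probe, as a sketch (this client skips TLS verification only to keep the example short; minikube's real client trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz performs one probe iteration: a short-timeout GET that
    // either returns the healthz body or reports the attempt as stopped.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // NOTE: verification disabled only for brevity in this sketch;
            // a real client should trust the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. Client.Timeout exceeded while awaiting headers
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s -> %d %s\n", url, resp.StatusCode, body)
        return nil
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            if err := checkHealthz("https://10.0.2.15:8443/healthz"); err == nil {
                return
            }
            time.Sleep(10 * time.Second)
        }
        fmt.Println("apiserver never became healthy")
    }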
	I1007 05:26:52.510374   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:26:52.554498   13060 logs.go:282] 2 containers: [eba0cb217bc2 8269cd6065ba]
	I1007 05:26:52.554666   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:26:52.574891   13060 logs.go:282] 2 containers: [dcb43861e68c 89902f34c603]
	I1007 05:26:52.575015   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:26:52.589671   13060 logs.go:282] 1 containers: [1a6f70326ce2]
	I1007 05:26:52.589766   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:26:52.606157   13060 logs.go:282] 2 containers: [a7744cef9eab 349aeab911b4]
	I1007 05:26:52.606252   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:26:52.617090   13060 logs.go:282] 1 containers: [aba6d1211c82]
	I1007 05:26:52.617177   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:26:52.627700   13060 logs.go:282] 2 containers: [a035f9be42f8 cff18c2b3cd6]
	I1007 05:26:52.627771   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:26:52.638469   13060 logs.go:282] 0 containers: []
	W1007 05:26:52.638481   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:26:52.638555   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:26:52.651536   13060 logs.go:282] 2 containers: [574be28e75fb 94aef91365df]
	I1007 05:26:52.651568   13060 logs.go:123] Gathering logs for kube-scheduler [349aeab911b4] ...
	I1007 05:26:52.651573   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349aeab911b4"
	I1007 05:26:52.664074   13060 logs.go:123] Gathering logs for storage-provisioner [574be28e75fb] ...
	I1007 05:26:52.664087   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574be28e75fb"
	I1007 05:26:52.676106   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:26:52.676119   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:26:52.689156   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:26:52.689166   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:26:52.714836   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:26:52.714845   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:26:52.753527   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:26:52.753622   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:26:52.753963   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:26:52.753967   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:26:52.758109   13060 logs.go:123] Gathering logs for etcd [89902f34c603] ...
	I1007 05:26:52.758117   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89902f34c603"
	I1007 05:26:52.779937   13060 logs.go:123] Gathering logs for coredns [1a6f70326ce2] ...
	I1007 05:26:52.779949   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6f70326ce2"
	I1007 05:26:52.791225   13060 logs.go:123] Gathering logs for kube-controller-manager [a035f9be42f8] ...
	I1007 05:26:52.791237   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a035f9be42f8"
	I1007 05:26:52.808214   13060 logs.go:123] Gathering logs for storage-provisioner [94aef91365df] ...
	I1007 05:26:52.808227   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94aef91365df"
	I1007 05:26:52.821102   13060 logs.go:123] Gathering logs for kube-apiserver [eba0cb217bc2] ...
	I1007 05:26:52.821113   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eba0cb217bc2"
	I1007 05:26:52.834821   13060 logs.go:123] Gathering logs for etcd [dcb43861e68c] ...
	I1007 05:26:52.834833   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcb43861e68c"
	I1007 05:26:52.855595   13060 logs.go:123] Gathering logs for kube-proxy [aba6d1211c82] ...
	I1007 05:26:52.855607   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aba6d1211c82"
	I1007 05:26:52.867480   13060 logs.go:123] Gathering logs for kube-controller-manager [cff18c2b3cd6] ...
	I1007 05:26:52.867493   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cff18c2b3cd6"
	I1007 05:26:52.878904   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:26:52.878917   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:26:52.949243   13060 logs.go:123] Gathering logs for kube-apiserver [8269cd6065ba] ...
	I1007 05:26:52.949258   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8269cd6065ba"
	I1007 05:26:52.974249   13060 logs.go:123] Gathering logs for kube-scheduler [a7744cef9eab] ...
	I1007 05:26:52.974260   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7744cef9eab"
	I1007 05:26:52.988275   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:26:52.988290   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:26:52.988316   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:26:52.988322   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:26:52.988326   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:26:52.988330   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:26:52.988333   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
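The diagnostic pass repeated after each failed health check is mechanical: list containers per component with "docker ps -a --filter name=k8s_<component>" (the kubelet's Docker integration names containers k8s_<container>_<pod>_<namespace>_...), then tail the last 400 log lines of each hit. Sketched in Go (component names copied from the log; containerIDs is an illustrative helper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose kubelet-
    // assigned name starts with k8s_<component>.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            for _, id := range ids {
                // tail the most recent log lines, as in the cycles above
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s\n", c, id, logs)
            }
        }
    }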
	I1007 05:27:02.989425   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:27:07.989893   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:27:07.990320   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:27:08.021462   13060 logs.go:282] 2 containers: [eba0cb217bc2 8269cd6065ba]
	I1007 05:27:08.021605   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:27:08.040933   13060 logs.go:282] 2 containers: [dcb43861e68c 89902f34c603]
	I1007 05:27:08.041039   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:27:08.055069   13060 logs.go:282] 1 containers: [1a6f70326ce2]
	I1007 05:27:08.055156   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:27:08.067462   13060 logs.go:282] 2 containers: [a7744cef9eab 349aeab911b4]
	I1007 05:27:08.067548   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:27:08.077771   13060 logs.go:282] 1 containers: [aba6d1211c82]
	I1007 05:27:08.077843   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:27:08.088244   13060 logs.go:282] 2 containers: [a035f9be42f8 cff18c2b3cd6]
	I1007 05:27:08.088321   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:27:08.102568   13060 logs.go:282] 0 containers: []
	W1007 05:27:08.102588   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:27:08.102656   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:27:08.113982   13060 logs.go:282] 2 containers: [574be28e75fb 94aef91365df]
	I1007 05:27:08.113998   13060 logs.go:123] Gathering logs for kube-proxy [aba6d1211c82] ...
	I1007 05:27:08.114003   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aba6d1211c82"
	I1007 05:27:08.125977   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:27:08.125989   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:27:08.164972   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:27:08.165064   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:27:08.165401   13060 logs.go:123] Gathering logs for kube-scheduler [349aeab911b4] ...
	I1007 05:27:08.165409   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349aeab911b4"
	I1007 05:27:08.177434   13060 logs.go:123] Gathering logs for etcd [dcb43861e68c] ...
	I1007 05:27:08.177445   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcb43861e68c"
	I1007 05:27:08.191363   13060 logs.go:123] Gathering logs for storage-provisioner [94aef91365df] ...
	I1007 05:27:08.191374   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94aef91365df"
	I1007 05:27:08.202614   13060 logs.go:123] Gathering logs for storage-provisioner [574be28e75fb] ...
	I1007 05:27:08.202626   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574be28e75fb"
	I1007 05:27:08.215617   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:27:08.215631   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:27:08.240819   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:27:08.240828   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:27:08.245448   13060 logs.go:123] Gathering logs for kube-scheduler [a7744cef9eab] ...
	I1007 05:27:08.245456   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7744cef9eab"
	I1007 05:27:08.257436   13060 logs.go:123] Gathering logs for kube-apiserver [8269cd6065ba] ...
	I1007 05:27:08.257444   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8269cd6065ba"
	I1007 05:27:08.276929   13060 logs.go:123] Gathering logs for etcd [89902f34c603] ...
	I1007 05:27:08.276941   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89902f34c603"
	I1007 05:27:08.294945   13060 logs.go:123] Gathering logs for coredns [1a6f70326ce2] ...
	I1007 05:27:08.294958   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6f70326ce2"
	I1007 05:27:08.306230   13060 logs.go:123] Gathering logs for kube-controller-manager [a035f9be42f8] ...
	I1007 05:27:08.306240   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a035f9be42f8"
	I1007 05:27:08.330633   13060 logs.go:123] Gathering logs for kube-controller-manager [cff18c2b3cd6] ...
	I1007 05:27:08.330644   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cff18c2b3cd6"
	I1007 05:27:08.342127   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:27:08.342137   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:27:08.353609   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:27:08.353617   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:27:08.390898   13060 logs.go:123] Gathering logs for kube-apiserver [eba0cb217bc2] ...
	I1007 05:27:08.390907   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eba0cb217bc2"
	I1007 05:27:08.405450   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:27:08.405460   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:27:08.405484   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:27:08.405488   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:27:08.405491   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:27:08.405494   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:27:08.405497   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:27:18.409588   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:27:23.412297   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:27:23.412784   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:27:23.453340   13060 logs.go:282] 2 containers: [eba0cb217bc2 8269cd6065ba]
	I1007 05:27:23.453490   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:27:23.475607   13060 logs.go:282] 2 containers: [dcb43861e68c 89902f34c603]
	I1007 05:27:23.475733   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:27:23.490608   13060 logs.go:282] 1 containers: [1a6f70326ce2]
	I1007 05:27:23.490681   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:27:23.503089   13060 logs.go:282] 2 containers: [a7744cef9eab 349aeab911b4]
	I1007 05:27:23.503171   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:27:23.514052   13060 logs.go:282] 1 containers: [aba6d1211c82]
	I1007 05:27:23.514119   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:27:23.525020   13060 logs.go:282] 2 containers: [a035f9be42f8 cff18c2b3cd6]
	I1007 05:27:23.525097   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:27:23.535094   13060 logs.go:282] 0 containers: []
	W1007 05:27:23.535105   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:27:23.535170   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:27:23.545918   13060 logs.go:282] 2 containers: [574be28e75fb 94aef91365df]
	I1007 05:27:23.545944   13060 logs.go:123] Gathering logs for kube-apiserver [8269cd6065ba] ...
	I1007 05:27:23.545949   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8269cd6065ba"
	I1007 05:27:23.566171   13060 logs.go:123] Gathering logs for etcd [dcb43861e68c] ...
	I1007 05:27:23.566182   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcb43861e68c"
	I1007 05:27:23.580136   13060 logs.go:123] Gathering logs for kube-scheduler [a7744cef9eab] ...
	I1007 05:27:23.580146   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7744cef9eab"
	I1007 05:27:23.591634   13060 logs.go:123] Gathering logs for kube-proxy [aba6d1211c82] ...
	I1007 05:27:23.591646   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aba6d1211c82"
	I1007 05:27:23.603039   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:27:23.603055   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:27:23.627419   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:27:23.627425   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:27:23.662478   13060 logs.go:123] Gathering logs for etcd [89902f34c603] ...
	I1007 05:27:23.662494   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89902f34c603"
	I1007 05:27:23.681498   13060 logs.go:123] Gathering logs for kube-controller-manager [a035f9be42f8] ...
	I1007 05:27:23.681508   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a035f9be42f8"
	I1007 05:27:23.700005   13060 logs.go:123] Gathering logs for kube-controller-manager [cff18c2b3cd6] ...
	I1007 05:27:23.700015   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cff18c2b3cd6"
	I1007 05:27:23.711292   13060 logs.go:123] Gathering logs for storage-provisioner [574be28e75fb] ...
	I1007 05:27:23.711306   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574be28e75fb"
	I1007 05:27:23.722680   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:27:23.722689   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:27:23.762186   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:27:23.762277   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:27:23.762609   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:27:23.762613   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:27:23.767071   13060 logs.go:123] Gathering logs for kube-apiserver [eba0cb217bc2] ...
	I1007 05:27:23.767076   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eba0cb217bc2"
	I1007 05:27:23.781388   13060 logs.go:123] Gathering logs for coredns [1a6f70326ce2] ...
	I1007 05:27:23.781404   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6f70326ce2"
	I1007 05:27:23.792932   13060 logs.go:123] Gathering logs for storage-provisioner [94aef91365df] ...
	I1007 05:27:23.792943   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94aef91365df"
	I1007 05:27:23.804311   13060 logs.go:123] Gathering logs for kube-scheduler [349aeab911b4] ...
	I1007 05:27:23.804320   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349aeab911b4"
	I1007 05:27:23.817422   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:27:23.817431   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:27:23.829297   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:27:23.829308   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:27:23.829336   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:27:23.829341   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:27:23.829345   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:27:23.829349   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:27:23.829352   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:27:33.833158   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:27:38.835852   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:27:38.836436   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:27:38.892349   13060 logs.go:282] 2 containers: [eba0cb217bc2 8269cd6065ba]
	I1007 05:27:38.892520   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:27:38.909576   13060 logs.go:282] 2 containers: [dcb43861e68c 89902f34c603]
	I1007 05:27:38.909670   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:27:38.922856   13060 logs.go:282] 1 containers: [1a6f70326ce2]
	I1007 05:27:38.922937   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:27:38.934154   13060 logs.go:282] 2 containers: [a7744cef9eab 349aeab911b4]
	I1007 05:27:38.934235   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:27:38.944809   13060 logs.go:282] 1 containers: [aba6d1211c82]
	I1007 05:27:38.944882   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:27:38.957319   13060 logs.go:282] 2 containers: [a035f9be42f8 cff18c2b3cd6]
	I1007 05:27:38.957399   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:27:38.970660   13060 logs.go:282] 0 containers: []
	W1007 05:27:38.970676   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:27:38.970736   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:27:38.984324   13060 logs.go:282] 2 containers: [574be28e75fb 94aef91365df]
	I1007 05:27:38.984345   13060 logs.go:123] Gathering logs for kube-scheduler [349aeab911b4] ...
	I1007 05:27:38.984350   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349aeab911b4"
	I1007 05:27:38.996040   13060 logs.go:123] Gathering logs for kube-proxy [aba6d1211c82] ...
	I1007 05:27:38.996049   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aba6d1211c82"
	I1007 05:27:39.009328   13060 logs.go:123] Gathering logs for storage-provisioner [94aef91365df] ...
	I1007 05:27:39.009337   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94aef91365df"
	I1007 05:27:39.024913   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:27:39.024923   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:27:39.050652   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:27:39.050665   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:27:39.066121   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:27:39.066135   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:27:39.102732   13060 logs.go:123] Gathering logs for kube-apiserver [eba0cb217bc2] ...
	I1007 05:27:39.102746   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eba0cb217bc2"
	I1007 05:27:39.117789   13060 logs.go:123] Gathering logs for kube-controller-manager [cff18c2b3cd6] ...
	I1007 05:27:39.117799   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cff18c2b3cd6"
	I1007 05:27:39.128994   13060 logs.go:123] Gathering logs for storage-provisioner [574be28e75fb] ...
	I1007 05:27:39.129009   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574be28e75fb"
	I1007 05:27:39.140847   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:27:39.140858   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:27:39.145413   13060 logs.go:123] Gathering logs for etcd [89902f34c603] ...
	I1007 05:27:39.145423   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89902f34c603"
	I1007 05:27:39.162661   13060 logs.go:123] Gathering logs for coredns [1a6f70326ce2] ...
	I1007 05:27:39.162675   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6f70326ce2"
	I1007 05:27:39.173908   13060 logs.go:123] Gathering logs for kube-controller-manager [a035f9be42f8] ...
	I1007 05:27:39.173919   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a035f9be42f8"
	I1007 05:27:39.197386   13060 logs.go:123] Gathering logs for kube-apiserver [8269cd6065ba] ...
	I1007 05:27:39.197397   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8269cd6065ba"
	I1007 05:27:39.220328   13060 logs.go:123] Gathering logs for etcd [dcb43861e68c] ...
	I1007 05:27:39.220336   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcb43861e68c"
	I1007 05:27:39.234504   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:27:39.234513   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:27:39.274063   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:27:39.274157   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:27:39.274508   13060 logs.go:123] Gathering logs for kube-scheduler [a7744cef9eab] ...
	I1007 05:27:39.274515   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7744cef9eab"
	I1007 05:27:39.286605   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:27:39.286616   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:27:39.286644   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:27:39.286650   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:27:39.286654   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:27:39.286657   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:27:39.286660   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:27:49.290769   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:27:54.293484   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:27:54.293678   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:27:54.305613   13060 logs.go:282] 2 containers: [eba0cb217bc2 8269cd6065ba]
	I1007 05:27:54.305694   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:27:54.316658   13060 logs.go:282] 2 containers: [dcb43861e68c 89902f34c603]
	I1007 05:27:54.316742   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:27:54.330243   13060 logs.go:282] 1 containers: [1a6f70326ce2]
	I1007 05:27:54.330319   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:27:54.341034   13060 logs.go:282] 2 containers: [a7744cef9eab 349aeab911b4]
	I1007 05:27:54.341117   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:27:54.351739   13060 logs.go:282] 1 containers: [aba6d1211c82]
	I1007 05:27:54.351809   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:27:54.362996   13060 logs.go:282] 2 containers: [a035f9be42f8 cff18c2b3cd6]
	I1007 05:27:54.363067   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:27:54.373759   13060 logs.go:282] 0 containers: []
	W1007 05:27:54.373772   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:27:54.373830   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:27:54.384640   13060 logs.go:282] 2 containers: [574be28e75fb 94aef91365df]
	I1007 05:27:54.384657   13060 logs.go:123] Gathering logs for storage-provisioner [94aef91365df] ...
	I1007 05:27:54.384664   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94aef91365df"
	I1007 05:27:54.395867   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:27:54.395879   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:27:54.432105   13060 logs.go:123] Gathering logs for kube-apiserver [8269cd6065ba] ...
	I1007 05:27:54.432118   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8269cd6065ba"
	I1007 05:27:54.451782   13060 logs.go:123] Gathering logs for kube-proxy [aba6d1211c82] ...
	I1007 05:27:54.451793   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aba6d1211c82"
	I1007 05:27:54.464154   13060 logs.go:123] Gathering logs for etcd [89902f34c603] ...
	I1007 05:27:54.464166   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89902f34c603"
	I1007 05:27:54.482625   13060 logs.go:123] Gathering logs for kube-scheduler [349aeab911b4] ...
	I1007 05:27:54.482639   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349aeab911b4"
	I1007 05:27:54.494726   13060 logs.go:123] Gathering logs for storage-provisioner [574be28e75fb] ...
	I1007 05:27:54.494742   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574be28e75fb"
	I1007 05:27:54.506913   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:27:54.506925   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:27:54.519302   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:27:54.519314   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:27:54.558811   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:27:54.558903   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:27:54.559251   13060 logs.go:123] Gathering logs for kube-controller-manager [cff18c2b3cd6] ...
	I1007 05:27:54.559254   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cff18c2b3cd6"
	I1007 05:27:54.570793   13060 logs.go:123] Gathering logs for etcd [dcb43861e68c] ...
	I1007 05:27:54.570804   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcb43861e68c"
	I1007 05:27:54.593603   13060 logs.go:123] Gathering logs for coredns [1a6f70326ce2] ...
	I1007 05:27:54.593621   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6f70326ce2"
	I1007 05:27:54.606798   13060 logs.go:123] Gathering logs for kube-scheduler [a7744cef9eab] ...
	I1007 05:27:54.606812   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7744cef9eab"
	I1007 05:27:54.621433   13060 logs.go:123] Gathering logs for kube-controller-manager [a035f9be42f8] ...
	I1007 05:27:54.621449   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a035f9be42f8"
	I1007 05:27:54.646594   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:27:54.646613   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:27:54.673022   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:27:54.673042   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:27:54.677874   13060 logs.go:123] Gathering logs for kube-apiserver [eba0cb217bc2] ...
	I1007 05:27:54.677888   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eba0cb217bc2"
	I1007 05:27:54.702416   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:27:54.702429   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:27:54.702460   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:27:54.702465   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:27:54.702468   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:27:54.702472   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:27:54.702475   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:28:04.706436   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:28:09.708696   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:28:09.709239   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:28:09.782057   13060 logs.go:282] 2 containers: [eba0cb217bc2 8269cd6065ba]
	I1007 05:28:09.782156   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:28:09.794604   13060 logs.go:282] 2 containers: [dcb43861e68c 89902f34c603]
	I1007 05:28:09.794698   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:28:09.805376   13060 logs.go:282] 1 containers: [1a6f70326ce2]
	I1007 05:28:09.805460   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:28:09.819702   13060 logs.go:282] 2 containers: [a7744cef9eab 349aeab911b4]
	I1007 05:28:09.819775   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:28:09.830705   13060 logs.go:282] 1 containers: [aba6d1211c82]
	I1007 05:28:09.830784   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:28:09.841776   13060 logs.go:282] 2 containers: [a035f9be42f8 cff18c2b3cd6]
	I1007 05:28:09.841854   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:28:09.852199   13060 logs.go:282] 0 containers: []
	W1007 05:28:09.852211   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:28:09.852284   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:28:09.862817   13060 logs.go:282] 2 containers: [574be28e75fb 94aef91365df]
	I1007 05:28:09.862836   13060 logs.go:123] Gathering logs for storage-provisioner [94aef91365df] ...
	I1007 05:28:09.862841   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94aef91365df"
	I1007 05:28:09.874172   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:28:09.874183   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:28:09.879428   13060 logs.go:123] Gathering logs for kube-apiserver [eba0cb217bc2] ...
	I1007 05:28:09.879437   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eba0cb217bc2"
	I1007 05:28:09.901819   13060 logs.go:123] Gathering logs for etcd [dcb43861e68c] ...
	I1007 05:28:09.901835   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcb43861e68c"
	I1007 05:28:09.915878   13060 logs.go:123] Gathering logs for etcd [89902f34c603] ...
	I1007 05:28:09.915888   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89902f34c603"
	I1007 05:28:09.939777   13060 logs.go:123] Gathering logs for kube-scheduler [a7744cef9eab] ...
	I1007 05:28:09.939789   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7744cef9eab"
	I1007 05:28:09.951825   13060 logs.go:123] Gathering logs for kube-scheduler [349aeab911b4] ...
	I1007 05:28:09.951838   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349aeab911b4"
	I1007 05:28:09.963503   13060 logs.go:123] Gathering logs for kube-controller-manager [a035f9be42f8] ...
	I1007 05:28:09.963518   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a035f9be42f8"
	I1007 05:28:09.985263   13060 logs.go:123] Gathering logs for kube-apiserver [8269cd6065ba] ...
	I1007 05:28:09.985275   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8269cd6065ba"
	I1007 05:28:10.004456   13060 logs.go:123] Gathering logs for coredns [1a6f70326ce2] ...
	I1007 05:28:10.004466   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6f70326ce2"
	I1007 05:28:10.015357   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:28:10.015366   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:28:10.039589   13060 logs.go:123] Gathering logs for kube-proxy [aba6d1211c82] ...
	I1007 05:28:10.039596   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aba6d1211c82"
	I1007 05:28:10.051154   13060 logs.go:123] Gathering logs for kube-controller-manager [cff18c2b3cd6] ...
	I1007 05:28:10.051164   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cff18c2b3cd6"
	I1007 05:28:10.063049   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:28:10.063059   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:28:10.103719   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:28:10.103813   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:28:10.104166   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:28:10.104171   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:28:10.141242   13060 logs.go:123] Gathering logs for storage-provisioner [574be28e75fb] ...
	I1007 05:28:10.141252   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574be28e75fb"
	I1007 05:28:10.153026   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:28:10.153037   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:28:10.166019   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:28:10.166029   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:28:10.166057   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:28:10.166062   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:28:10.166065   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:28:10.166068   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:28:10.166071   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:28:20.170124   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:28:25.171553   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:28:25.172202   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:28:25.213348   13060 logs.go:282] 2 containers: [eba0cb217bc2 8269cd6065ba]
	I1007 05:28:25.213509   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:28:25.235500   13060 logs.go:282] 2 containers: [dcb43861e68c 89902f34c603]
	I1007 05:28:25.235630   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:28:25.254304   13060 logs.go:282] 1 containers: [1a6f70326ce2]
	I1007 05:28:25.254391   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:28:25.266614   13060 logs.go:282] 2 containers: [a7744cef9eab 349aeab911b4]
	I1007 05:28:25.266698   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:28:25.277250   13060 logs.go:282] 1 containers: [aba6d1211c82]
	I1007 05:28:25.277324   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:28:25.288238   13060 logs.go:282] 2 containers: [a035f9be42f8 cff18c2b3cd6]
	I1007 05:28:25.288312   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:28:25.298513   13060 logs.go:282] 0 containers: []
	W1007 05:28:25.298527   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:28:25.298588   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:28:25.312048   13060 logs.go:282] 2 containers: [574be28e75fb 94aef91365df]
	I1007 05:28:25.312069   13060 logs.go:123] Gathering logs for etcd [89902f34c603] ...
	I1007 05:28:25.312074   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89902f34c603"
	I1007 05:28:25.330208   13060 logs.go:123] Gathering logs for kube-proxy [aba6d1211c82] ...
	I1007 05:28:25.330217   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aba6d1211c82"
	I1007 05:28:25.343084   13060 logs.go:123] Gathering logs for coredns [1a6f70326ce2] ...
	I1007 05:28:25.343093   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6f70326ce2"
	I1007 05:28:25.354529   13060 logs.go:123] Gathering logs for storage-provisioner [94aef91365df] ...
	I1007 05:28:25.354539   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94aef91365df"
	I1007 05:28:25.366152   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:28:25.366166   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:28:25.378003   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:28:25.378013   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:28:25.413374   13060 logs.go:123] Gathering logs for etcd [dcb43861e68c] ...
	I1007 05:28:25.413388   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcb43861e68c"
	I1007 05:28:25.428723   13060 logs.go:123] Gathering logs for kube-scheduler [a7744cef9eab] ...
	I1007 05:28:25.428733   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7744cef9eab"
	I1007 05:28:25.440797   13060 logs.go:123] Gathering logs for kube-scheduler [349aeab911b4] ...
	I1007 05:28:25.440813   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349aeab911b4"
	I1007 05:28:25.452797   13060 logs.go:123] Gathering logs for kube-apiserver [eba0cb217bc2] ...
	I1007 05:28:25.452808   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eba0cb217bc2"
	I1007 05:28:25.470798   13060 logs.go:123] Gathering logs for kube-apiserver [8269cd6065ba] ...
	I1007 05:28:25.470809   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8269cd6065ba"
	I1007 05:28:25.490206   13060 logs.go:123] Gathering logs for kube-controller-manager [a035f9be42f8] ...
	I1007 05:28:25.490215   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a035f9be42f8"
	I1007 05:28:25.509158   13060 logs.go:123] Gathering logs for kube-controller-manager [cff18c2b3cd6] ...
	I1007 05:28:25.509173   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cff18c2b3cd6"
	I1007 05:28:25.520402   13060 logs.go:123] Gathering logs for storage-provisioner [574be28e75fb] ...
	I1007 05:28:25.520414   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574be28e75fb"
	I1007 05:28:25.531900   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:28:25.531916   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:28:25.556600   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:28:25.556607   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:28:25.595741   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:28:25.595833   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:28:25.596173   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:28:25.596178   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:28:25.600513   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:28:25.600520   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:28:25.600543   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:28:25.600554   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:28:25.600559   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:28:25.600574   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:28:25.600578   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:28:35.604601   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:28:40.606789   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:28:40.607376   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:28:40.651454   13060 logs.go:282] 2 containers: [eba0cb217bc2 8269cd6065ba]
	I1007 05:28:40.651599   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:28:40.675624   13060 logs.go:282] 2 containers: [dcb43861e68c 89902f34c603]
	I1007 05:28:40.675725   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:28:40.689755   13060 logs.go:282] 1 containers: [1a6f70326ce2]
	I1007 05:28:40.689839   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:28:40.701866   13060 logs.go:282] 2 containers: [a7744cef9eab 349aeab911b4]
	I1007 05:28:40.701946   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:28:40.712451   13060 logs.go:282] 1 containers: [aba6d1211c82]
	I1007 05:28:40.712528   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:28:40.723095   13060 logs.go:282] 2 containers: [a035f9be42f8 cff18c2b3cd6]
	I1007 05:28:40.723170   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:28:40.737748   13060 logs.go:282] 0 containers: []
	W1007 05:28:40.737767   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:28:40.737835   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:28:40.748564   13060 logs.go:282] 2 containers: [574be28e75fb 94aef91365df]
	I1007 05:28:40.748587   13060 logs.go:123] Gathering logs for kube-controller-manager [cff18c2b3cd6] ...
	I1007 05:28:40.748592   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cff18c2b3cd6"
	I1007 05:28:40.760484   13060 logs.go:123] Gathering logs for kube-apiserver [eba0cb217bc2] ...
	I1007 05:28:40.760494   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eba0cb217bc2"
	I1007 05:28:40.774714   13060 logs.go:123] Gathering logs for etcd [dcb43861e68c] ...
	I1007 05:28:40.774724   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcb43861e68c"
	I1007 05:28:40.788729   13060 logs.go:123] Gathering logs for coredns [1a6f70326ce2] ...
	I1007 05:28:40.788738   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6f70326ce2"
	I1007 05:28:40.800583   13060 logs.go:123] Gathering logs for kube-scheduler [a7744cef9eab] ...
	I1007 05:28:40.800597   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7744cef9eab"
	I1007 05:28:40.813821   13060 logs.go:123] Gathering logs for kube-scheduler [349aeab911b4] ...
	I1007 05:28:40.813830   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349aeab911b4"
	I1007 05:28:40.826689   13060 logs.go:123] Gathering logs for kube-controller-manager [a035f9be42f8] ...
	I1007 05:28:40.826705   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a035f9be42f8"
	I1007 05:28:40.844197   13060 logs.go:123] Gathering logs for storage-provisioner [574be28e75fb] ...
	I1007 05:28:40.844208   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574be28e75fb"
	I1007 05:28:40.855947   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:28:40.855957   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:28:40.867873   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:28:40.867884   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:28:40.905717   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:28:40.905813   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:28:40.906157   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:28:40.906162   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:28:40.910208   13060 logs.go:123] Gathering logs for etcd [89902f34c603] ...
	I1007 05:28:40.910215   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89902f34c603"
	I1007 05:28:40.927822   13060 logs.go:123] Gathering logs for storage-provisioner [94aef91365df] ...
	I1007 05:28:40.927834   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94aef91365df"
	I1007 05:28:40.938831   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:28:40.938841   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:28:40.961554   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:28:40.961567   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:28:41.005483   13060 logs.go:123] Gathering logs for kube-apiserver [8269cd6065ba] ...
	I1007 05:28:41.005494   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8269cd6065ba"
	I1007 05:28:41.024553   13060 logs.go:123] Gathering logs for kube-proxy [aba6d1211c82] ...
	I1007 05:28:41.024565   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aba6d1211c82"
	I1007 05:28:41.036712   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:28:41.036723   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:28:41.036748   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:28:41.036755   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:28:41.036758   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:28:41.036761   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:28:41.036766   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:28:51.040761   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:28:56.043015   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:28:56.043123   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:28:56.065650   13060 logs.go:282] 2 containers: [eba0cb217bc2 8269cd6065ba]
	I1007 05:28:56.065731   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:28:56.080882   13060 logs.go:282] 2 containers: [dcb43861e68c 89902f34c603]
	I1007 05:28:56.080961   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:28:56.094286   13060 logs.go:282] 1 containers: [1a6f70326ce2]
	I1007 05:28:56.094370   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:28:56.105916   13060 logs.go:282] 2 containers: [a7744cef9eab 349aeab911b4]
	I1007 05:28:56.105999   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:28:56.116638   13060 logs.go:282] 1 containers: [aba6d1211c82]
	I1007 05:28:56.116719   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:28:56.127994   13060 logs.go:282] 2 containers: [a035f9be42f8 cff18c2b3cd6]
	I1007 05:28:56.128093   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:28:56.138872   13060 logs.go:282] 0 containers: []
	W1007 05:28:56.138886   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:28:56.138951   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:28:56.149892   13060 logs.go:282] 2 containers: [574be28e75fb 94aef91365df]
	I1007 05:28:56.149910   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:28:56.149916   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:28:56.154081   13060 logs.go:123] Gathering logs for kube-apiserver [eba0cb217bc2] ...
	I1007 05:28:56.154088   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eba0cb217bc2"
	I1007 05:28:56.172806   13060 logs.go:123] Gathering logs for etcd [dcb43861e68c] ...
	I1007 05:28:56.172819   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcb43861e68c"
	I1007 05:28:56.187340   13060 logs.go:123] Gathering logs for kube-controller-manager [cff18c2b3cd6] ...
	I1007 05:28:56.187350   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cff18c2b3cd6"
	I1007 05:28:56.199652   13060 logs.go:123] Gathering logs for storage-provisioner [94aef91365df] ...
	I1007 05:28:56.199662   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94aef91365df"
	I1007 05:28:56.211639   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:28:56.211651   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:28:56.252904   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:28:56.252999   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:28:56.253347   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:28:56.253353   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:28:56.289746   13060 logs.go:123] Gathering logs for etcd [89902f34c603] ...
	I1007 05:28:56.289758   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89902f34c603"
	I1007 05:28:56.308661   13060 logs.go:123] Gathering logs for kube-proxy [aba6d1211c82] ...
	I1007 05:28:56.308674   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aba6d1211c82"
	I1007 05:28:56.323422   13060 logs.go:123] Gathering logs for kube-apiserver [8269cd6065ba] ...
	I1007 05:28:56.323434   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8269cd6065ba"
	I1007 05:28:56.343054   13060 logs.go:123] Gathering logs for coredns [1a6f70326ce2] ...
	I1007 05:28:56.343065   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6f70326ce2"
	I1007 05:28:56.355098   13060 logs.go:123] Gathering logs for kube-scheduler [349aeab911b4] ...
	I1007 05:28:56.355110   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349aeab911b4"
	I1007 05:28:56.367776   13060 logs.go:123] Gathering logs for kube-controller-manager [a035f9be42f8] ...
	I1007 05:28:56.367786   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a035f9be42f8"
	I1007 05:28:56.402508   13060 logs.go:123] Gathering logs for kube-scheduler [a7744cef9eab] ...
	I1007 05:28:56.402517   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7744cef9eab"
	I1007 05:28:56.418362   13060 logs.go:123] Gathering logs for storage-provisioner [574be28e75fb] ...
	I1007 05:28:56.418373   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574be28e75fb"
	I1007 05:28:56.433679   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:28:56.433689   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:28:56.457843   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:28:56.457854   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:28:56.470174   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:28:56.470185   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:28:56.470211   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:28:56.470216   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:28:56.470220   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:28:56.470223   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:28:56.470226   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:29:06.474168   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:11.476328   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:11.476511   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:29:11.487631   13060 logs.go:282] 2 containers: [eba0cb217bc2 8269cd6065ba]
	I1007 05:29:11.487705   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:29:11.498319   13060 logs.go:282] 2 containers: [dcb43861e68c 89902f34c603]
	I1007 05:29:11.498399   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:29:11.508691   13060 logs.go:282] 1 containers: [1a6f70326ce2]
	I1007 05:29:11.508770   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:29:11.519624   13060 logs.go:282] 2 containers: [a7744cef9eab 349aeab911b4]
	I1007 05:29:11.519716   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:29:11.530535   13060 logs.go:282] 1 containers: [aba6d1211c82]
	I1007 05:29:11.530615   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:29:11.548237   13060 logs.go:282] 2 containers: [a035f9be42f8 cff18c2b3cd6]
	I1007 05:29:11.548321   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:29:11.558626   13060 logs.go:282] 0 containers: []
	W1007 05:29:11.558638   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:29:11.558695   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:29:11.569229   13060 logs.go:282] 2 containers: [574be28e75fb 94aef91365df]
	I1007 05:29:11.569249   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:29:11.569254   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:29:11.594096   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:29:11.594111   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:29:11.606191   13060 logs.go:123] Gathering logs for etcd [dcb43861e68c] ...
	I1007 05:29:11.606202   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcb43861e68c"
	I1007 05:29:11.621770   13060 logs.go:123] Gathering logs for coredns [1a6f70326ce2] ...
	I1007 05:29:11.621784   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6f70326ce2"
	I1007 05:29:11.633331   13060 logs.go:123] Gathering logs for kube-scheduler [a7744cef9eab] ...
	I1007 05:29:11.633342   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7744cef9eab"
	I1007 05:29:11.645478   13060 logs.go:123] Gathering logs for kube-scheduler [349aeab911b4] ...
	I1007 05:29:11.645489   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349aeab911b4"
	I1007 05:29:11.664076   13060 logs.go:123] Gathering logs for etcd [89902f34c603] ...
	I1007 05:29:11.664088   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89902f34c603"
	I1007 05:29:11.689512   13060 logs.go:123] Gathering logs for storage-provisioner [574be28e75fb] ...
	I1007 05:29:11.689525   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574be28e75fb"
	I1007 05:29:11.704932   13060 logs.go:123] Gathering logs for storage-provisioner [94aef91365df] ...
	I1007 05:29:11.704947   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94aef91365df"
	I1007 05:29:11.717942   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:29:11.717953   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:29:11.758741   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:29:11.758844   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:29:11.759195   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:29:11.759203   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:29:11.763693   13060 logs.go:123] Gathering logs for kube-controller-manager [a035f9be42f8] ...
	I1007 05:29:11.763702   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a035f9be42f8"
	I1007 05:29:11.782887   13060 logs.go:123] Gathering logs for kube-controller-manager [cff18c2b3cd6] ...
	I1007 05:29:11.782899   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cff18c2b3cd6"
	I1007 05:29:11.798389   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:29:11.798403   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:29:11.833058   13060 logs.go:123] Gathering logs for kube-apiserver [eba0cb217bc2] ...
	I1007 05:29:11.833071   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eba0cb217bc2"
	I1007 05:29:11.847565   13060 logs.go:123] Gathering logs for kube-apiserver [8269cd6065ba] ...
	I1007 05:29:11.847578   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8269cd6065ba"
	I1007 05:29:11.867681   13060 logs.go:123] Gathering logs for kube-proxy [aba6d1211c82] ...
	I1007 05:29:11.867695   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aba6d1211c82"
	I1007 05:29:11.880312   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:29:11.880322   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:29:11.880348   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:29:11.880356   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:29:11.880359   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:29:11.880364   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:29:11.880367   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:29:21.884376   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:26.886509   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:26.886700   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:29:26.898640   13060 logs.go:282] 2 containers: [eba0cb217bc2 8269cd6065ba]
	I1007 05:29:26.898723   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:29:26.909444   13060 logs.go:282] 2 containers: [dcb43861e68c 89902f34c603]
	I1007 05:29:26.909513   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:29:26.919777   13060 logs.go:282] 1 containers: [1a6f70326ce2]
	I1007 05:29:26.919859   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:29:26.930282   13060 logs.go:282] 2 containers: [a7744cef9eab 349aeab911b4]
	I1007 05:29:26.930354   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:29:26.941119   13060 logs.go:282] 1 containers: [aba6d1211c82]
	I1007 05:29:26.941204   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:29:26.951663   13060 logs.go:282] 2 containers: [a035f9be42f8 cff18c2b3cd6]
	I1007 05:29:26.951732   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:29:26.962261   13060 logs.go:282] 0 containers: []
	W1007 05:29:26.962273   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:29:26.962342   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:29:26.973542   13060 logs.go:282] 2 containers: [574be28e75fb 94aef91365df]
	I1007 05:29:26.973564   13060 logs.go:123] Gathering logs for kube-scheduler [349aeab911b4] ...
	I1007 05:29:26.973569   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349aeab911b4"
	I1007 05:29:26.985501   13060 logs.go:123] Gathering logs for storage-provisioner [574be28e75fb] ...
	I1007 05:29:26.985512   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574be28e75fb"
	I1007 05:29:26.997332   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:29:26.997341   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:29:27.008986   13060 logs.go:123] Gathering logs for kube-apiserver [eba0cb217bc2] ...
	I1007 05:29:27.009003   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eba0cb217bc2"
	I1007 05:29:27.022974   13060 logs.go:123] Gathering logs for etcd [89902f34c603] ...
	I1007 05:29:27.022988   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89902f34c603"
	I1007 05:29:27.040474   13060 logs.go:123] Gathering logs for coredns [1a6f70326ce2] ...
	I1007 05:29:27.040486   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6f70326ce2"
	I1007 05:29:27.051985   13060 logs.go:123] Gathering logs for kube-controller-manager [a035f9be42f8] ...
	I1007 05:29:27.051995   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a035f9be42f8"
	I1007 05:29:27.068984   13060 logs.go:123] Gathering logs for kube-controller-manager [cff18c2b3cd6] ...
	I1007 05:29:27.068992   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cff18c2b3cd6"
	I1007 05:29:27.080695   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:29:27.080704   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:29:27.115571   13060 logs.go:123] Gathering logs for kube-apiserver [8269cd6065ba] ...
	I1007 05:29:27.115586   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8269cd6065ba"
	I1007 05:29:27.135143   13060 logs.go:123] Gathering logs for etcd [dcb43861e68c] ...
	I1007 05:29:27.135157   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcb43861e68c"
	I1007 05:29:27.148881   13060 logs.go:123] Gathering logs for storage-provisioner [94aef91365df] ...
	I1007 05:29:27.148891   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94aef91365df"
	I1007 05:29:27.160398   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:29:27.160411   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:29:27.199041   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:29:27.199133   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:29:27.199473   13060 logs.go:123] Gathering logs for kube-scheduler [a7744cef9eab] ...
	I1007 05:29:27.199478   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7744cef9eab"
	I1007 05:29:27.211627   13060 logs.go:123] Gathering logs for kube-proxy [aba6d1211c82] ...
	I1007 05:29:27.211637   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aba6d1211c82"
	I1007 05:29:27.223488   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:29:27.223496   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:29:27.247959   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:29:27.247966   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:29:27.252546   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:29:27.252556   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:29:27.252588   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:29:27.252593   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:29:27.252596   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:29:27.252599   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:29:27.252604   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:29:37.256627   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:42.259188   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:42.259698   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:29:42.297890   13060 logs.go:282] 2 containers: [eba0cb217bc2 8269cd6065ba]
	I1007 05:29:42.298036   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:29:42.318277   13060 logs.go:282] 2 containers: [dcb43861e68c 89902f34c603]
	I1007 05:29:42.318392   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:29:42.333698   13060 logs.go:282] 1 containers: [1a6f70326ce2]
	I1007 05:29:42.333789   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:29:42.346521   13060 logs.go:282] 2 containers: [a7744cef9eab 349aeab911b4]
	I1007 05:29:42.346602   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:29:42.358146   13060 logs.go:282] 1 containers: [aba6d1211c82]
	I1007 05:29:42.358226   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:29:42.368981   13060 logs.go:282] 2 containers: [a035f9be42f8 cff18c2b3cd6]
	I1007 05:29:42.369062   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:29:42.380727   13060 logs.go:282] 0 containers: []
	W1007 05:29:42.380740   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:29:42.380804   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:29:42.394331   13060 logs.go:282] 2 containers: [574be28e75fb 94aef91365df]
	I1007 05:29:42.394349   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:29:42.394355   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:29:42.398715   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:29:42.398722   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:29:42.433688   13060 logs.go:123] Gathering logs for etcd [dcb43861e68c] ...
	I1007 05:29:42.433701   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcb43861e68c"
	I1007 05:29:42.448245   13060 logs.go:123] Gathering logs for coredns [1a6f70326ce2] ...
	I1007 05:29:42.448255   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6f70326ce2"
	I1007 05:29:42.459609   13060 logs.go:123] Gathering logs for kube-scheduler [a7744cef9eab] ...
	I1007 05:29:42.459621   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7744cef9eab"
	I1007 05:29:42.471280   13060 logs.go:123] Gathering logs for storage-provisioner [574be28e75fb] ...
	I1007 05:29:42.471295   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574be28e75fb"
	I1007 05:29:42.482856   13060 logs.go:123] Gathering logs for kube-apiserver [eba0cb217bc2] ...
	I1007 05:29:42.482867   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eba0cb217bc2"
	I1007 05:29:42.497392   13060 logs.go:123] Gathering logs for kube-controller-manager [a035f9be42f8] ...
	I1007 05:29:42.497405   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a035f9be42f8"
	I1007 05:29:42.514642   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:29:42.514655   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:29:42.554570   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:29:42.554661   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:29:42.555016   13060 logs.go:123] Gathering logs for kube-scheduler [349aeab911b4] ...
	I1007 05:29:42.555022   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349aeab911b4"
	I1007 05:29:42.566963   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:29:42.566973   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:29:42.590946   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:29:42.590957   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:29:42.604293   13060 logs.go:123] Gathering logs for kube-apiserver [8269cd6065ba] ...
	I1007 05:29:42.604307   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8269cd6065ba"
	I1007 05:29:42.623726   13060 logs.go:123] Gathering logs for etcd [89902f34c603] ...
	I1007 05:29:42.623739   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89902f34c603"
	I1007 05:29:42.641896   13060 logs.go:123] Gathering logs for kube-proxy [aba6d1211c82] ...
	I1007 05:29:42.641910   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aba6d1211c82"
	I1007 05:29:42.654778   13060 logs.go:123] Gathering logs for kube-controller-manager [cff18c2b3cd6] ...
	I1007 05:29:42.654788   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cff18c2b3cd6"
	I1007 05:29:42.666048   13060 logs.go:123] Gathering logs for storage-provisioner [94aef91365df] ...
	I1007 05:29:42.666058   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94aef91365df"
	I1007 05:29:42.679177   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:29:42.679188   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:29:42.679216   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:29:42.679221   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:29:42.679225   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:29:42.679229   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:29:42.679232   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
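Each diagnostic cycle above follows the same pattern: enumerate container IDs per control-plane component with a filtered docker ps, then tail the last 400 lines of each container's logs. A self-contained Go sketch of that loop (the docker invocations are taken from the log; the helper names and exec-based approach are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containersFor mirrors the "docker ps -a --filter=name=k8s_<component>
// --format={{.ID}}" step that precedes each "N containers: [...]" line.
func containersFor(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors the `docker logs --tail 400 <id>` calls in the cycle.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containersFor(c)
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			if logs, err := tailLogs(id); err == nil {
				fmt.Print(logs)
			}
		}
	}
}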
	I1007 05:29:52.683194   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:57.685585   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:57.685807   13060 kubeadm.go:597] duration metric: took 4m8.055734083s to restartPrimaryControlPlane
	W1007 05:29:57.686019   13060 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 05:29:57.686098   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1007 05:29:58.733517   13060 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.047420334s)
	I1007 05:29:58.733589   13060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 05:29:58.738812   13060 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 05:29:58.741814   13060 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 05:29:58.744580   13060 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 05:29:58.744588   13060 kubeadm.go:157] found existing configuration files:
	
	I1007 05:29:58.744620   13060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/admin.conf
	I1007 05:29:58.746966   13060 kubeadm.go:163] "https://control-plane.minikube.internal:52242" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 05:29:58.746993   13060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 05:29:58.749947   13060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/kubelet.conf
	I1007 05:29:58.752981   13060 kubeadm.go:163] "https://control-plane.minikube.internal:52242" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 05:29:58.753011   13060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 05:29:58.755624   13060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/controller-manager.conf
	I1007 05:29:58.758159   13060 kubeadm.go:163] "https://control-plane.minikube.internal:52242" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 05:29:58.758188   13060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 05:29:58.761311   13060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/scheduler.conf
	I1007 05:29:58.764129   13060 kubeadm.go:163] "https://control-plane.minikube.internal:52242" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 05:29:58.764160   13060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
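The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is checked for the expected control-plane endpoint, and any file that fails the check (exit status 2 here simply means the file does not exist) is removed before kubeadm regenerates it. A compact sketch of that pass (the endpoint and file list come from the log; the function name and exec-based approach are assumptions):

package main

import (
	"fmt"
	"os/exec"
)

// removeStaleConfigs drops any kubeconfig that does not reference the
// expected control-plane endpoint, matching the grep/rm pairs above.
func removeStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		// grep exits 1 when the pattern is absent and 2 when the file is
		// missing; either way the config cannot be reused.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			if rmErr := exec.Command("sudo", "rm", "-f", f).Run(); rmErr != nil {
				fmt.Println("remove failed:", rmErr)
			}
		}
	}
}

func main() {
	removeStaleConfigs("https://control-plane.minikube.internal:52242", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}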
	I1007 05:29:58.766719   13060 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 05:29:58.785904   13060 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1007 05:29:58.786018   13060 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 05:29:58.834880   13060 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 05:29:58.834943   13060 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 05:29:58.835010   13060 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 05:29:58.892538   13060 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 05:29:58.896528   13060 out.go:235]   - Generating certificates and keys ...
	I1007 05:29:58.896563   13060 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 05:29:58.896597   13060 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 05:29:58.896645   13060 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 05:29:58.896678   13060 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 05:29:58.896717   13060 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 05:29:58.896743   13060 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 05:29:58.896778   13060 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 05:29:58.896808   13060 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 05:29:58.896859   13060 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 05:29:58.896894   13060 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 05:29:58.896911   13060 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 05:29:58.896943   13060 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 05:29:59.400631   13060 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 05:29:59.444892   13060 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 05:29:59.494129   13060 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 05:29:59.556451   13060 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 05:29:59.585391   13060 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 05:29:59.585678   13060 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 05:29:59.585792   13060 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 05:29:59.679411   13060 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 05:29:59.683595   13060 out.go:235]   - Booting up control plane ...
	I1007 05:29:59.683637   13060 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 05:29:59.683678   13060 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 05:29:59.683785   13060 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 05:29:59.684006   13060 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 05:29:59.684990   13060 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 05:30:04.688774   13060 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.003350 seconds
	I1007 05:30:04.688879   13060 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 05:30:04.694347   13060 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 05:30:05.202980   13060 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 05:30:05.203095   13060 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-494000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 05:30:05.707870   13060 kubeadm.go:310] [bootstrap-token] Using token: okg04l.5oe8mopp6o37senu
	I1007 05:30:05.711896   13060 out.go:235]   - Configuring RBAC rules ...
	I1007 05:30:05.711966   13060 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 05:30:05.712020   13060 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 05:30:05.718701   13060 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 05:30:05.719605   13060 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 05:30:05.720591   13060 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 05:30:05.721448   13060 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 05:30:05.725831   13060 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 05:30:05.894562   13060 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 05:30:06.113640   13060 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 05:30:06.114086   13060 kubeadm.go:310] 
	I1007 05:30:06.114122   13060 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 05:30:06.114130   13060 kubeadm.go:310] 
	I1007 05:30:06.114168   13060 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 05:30:06.114237   13060 kubeadm.go:310] 
	I1007 05:30:06.114258   13060 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 05:30:06.114302   13060 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 05:30:06.114336   13060 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 05:30:06.114340   13060 kubeadm.go:310] 
	I1007 05:30:06.114366   13060 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 05:30:06.114370   13060 kubeadm.go:310] 
	I1007 05:30:06.114397   13060 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 05:30:06.114403   13060 kubeadm.go:310] 
	I1007 05:30:06.114428   13060 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 05:30:06.114517   13060 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 05:30:06.114609   13060 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 05:30:06.114615   13060 kubeadm.go:310] 
	I1007 05:30:06.114670   13060 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 05:30:06.114739   13060 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 05:30:06.114754   13060 kubeadm.go:310] 
	I1007 05:30:06.114805   13060 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token okg04l.5oe8mopp6o37senu \
	I1007 05:30:06.114876   13060 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a062c1d11feacd55c1665e5cde1180fa46a0cb1088d7ea40ca5bcc8cf3f8fe8c \
	I1007 05:30:06.114889   13060 kubeadm.go:310] 	--control-plane 
	I1007 05:30:06.114891   13060 kubeadm.go:310] 
	I1007 05:30:06.114949   13060 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 05:30:06.114955   13060 kubeadm.go:310] 
	I1007 05:30:06.114998   13060 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token okg04l.5oe8mopp6o37senu \
	I1007 05:30:06.115116   13060 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a062c1d11feacd55c1665e5cde1180fa46a0cb1088d7ea40ca5bcc8cf3f8fe8c 
	I1007 05:30:06.115202   13060 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
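The --discovery-token-ca-cert-hash value in the join commands above is, per the kubeadm documentation, the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. A self-contained Go sketch that recomputes such a hash (the ca.crt path is an assumption based on the certificateDir logged earlier):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash recomputes the value passed as --discovery-token-ca-cert-hash:
// sha256 over the DER-encoded SubjectPublicKeyInfo of the CA certificate.
func caCertHash(pemBytes []byte) (string, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(spki)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // assumed path
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(caCertHash(pemBytes))
}

Pointed at the same CA, this should reproduce the sha256:a062c1d1... value printed in the join commands.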
	I1007 05:30:06.115210   13060 cni.go:84] Creating CNI manager for ""
	I1007 05:30:06.115217   13060 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:30:06.119531   13060 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 05:30:06.126643   13060 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 05:30:06.130319   13060 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1007 05:30:06.135426   13060 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 05:30:06.135490   13060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 05:30:06.135523   13060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-494000 minikube.k8s.io/updated_at=2024_10_07T05_30_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=running-upgrade-494000 minikube.k8s.io/primary=true
	I1007 05:30:06.140637   13060 ops.go:34] apiserver oom_adj: -16
	I1007 05:30:06.168622   13060 kubeadm.go:1113] duration metric: took 33.183417ms to wait for elevateKubeSystemPrivileges
	I1007 05:30:06.182798   13060 kubeadm.go:394] duration metric: took 4m16.57003675s to StartCluster
	I1007 05:30:06.182818   13060 settings.go:142] acquiring lock: {Name:mk5a4e22b238c18e7ccc84c412018fc85088176f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:30:06.182999   13060 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:30:06.183361   13060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/kubeconfig: {Name:mkfa460adb077498749c83f32a682247504db19f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:30:06.183546   13060 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:30:06.183646   13060 config.go:182] Loaded profile config "running-upgrade-494000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:30:06.183580   13060 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 05:30:06.183675   13060 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-494000"
	I1007 05:30:06.183677   13060 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-494000"
	I1007 05:30:06.183682   13060 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-494000"
	W1007 05:30:06.183686   13060 addons.go:243] addon storage-provisioner should already be in state true
	I1007 05:30:06.183699   13060 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-494000"
	I1007 05:30:06.183699   13060 host.go:66] Checking if "running-upgrade-494000" exists ...
	I1007 05:30:06.184667   13060 kapi.go:59] client config for running-upgrade-494000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/client.key", CAFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1063c7ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 05:30:06.184790   13060 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-494000"
	W1007 05:30:06.184795   13060 addons.go:243] addon default-storageclass should already be in state true
	I1007 05:30:06.184801   13060 host.go:66] Checking if "running-upgrade-494000" exists ...
	I1007 05:30:06.186465   13060 out.go:177] * Verifying Kubernetes components...
	I1007 05:30:06.186803   13060 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 05:30:06.190560   13060 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 05:30:06.190583   13060 sshutil.go:53] new ssh client: &{IP:localhost Port:52210 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/running-upgrade-494000/id_rsa Username:docker}
	I1007 05:30:06.194462   13060 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:30:06.198529   13060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:30:06.202633   13060 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 05:30:06.202648   13060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 05:30:06.202663   13060 sshutil.go:53] new ssh client: &{IP:localhost Port:52210 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/running-upgrade-494000/id_rsa Username:docker}
	I1007 05:30:06.281198   13060 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 05:30:06.286562   13060 api_server.go:52] waiting for apiserver process to appear ...
	I1007 05:30:06.286618   13060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:30:06.291162   13060 api_server.go:72] duration metric: took 107.606417ms to wait for apiserver process to appear ...
	I1007 05:30:06.291172   13060 api_server.go:88] waiting for apiserver healthz status ...
	I1007 05:30:06.291180   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:06.296411   13060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 05:30:06.316856   13060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 05:30:06.661944   13060 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 05:30:06.661956   13060 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
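The two envvar.go lines above report client-go's environment-driven feature gates at their compiled-in defaults. A rough Go sketch of that mechanism (the KUBE_FEATURE_ prefix follows client-go's envvar-based gates; treating only the literal string "true" as enabling is a simplification):

package main

import (
	"fmt"
	"os"
)

// featureEnabled approximates how an envvar-based feature gate resolves:
// an environment variable KUBE_FEATURE_<Name> can override the default.
func featureEnabled(name string, def bool) bool {
	if v, ok := os.LookupEnv("KUBE_FEATURE_" + name); ok {
		return v == "true"
	}
	return def
}

func main() {
	for _, f := range []string{"WatchListClient", "InformerResourceVersion"} {
		fmt.Printf("feature=%q enabled=%v\n", f, featureEnabled(f, false))
	}
}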
	I1007 05:30:11.293182   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:11.293222   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:16.293450   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:16.293473   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:21.293667   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:21.293685   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:26.293979   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:26.294021   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:31.294907   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:31.294926   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:36.295583   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:36.295639   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1007 05:30:36.663863   13060 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1007 05:30:36.674203   13060 out.go:177] * Enabled addons: storage-provisioner
	I1007 05:30:36.681162   13060 addons.go:510] duration metric: took 30.498146625s for enable addons: enabled=[storage-provisioner]
	I1007 05:30:41.296638   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:41.296698   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:46.298000   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:46.298042   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:51.299644   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:51.299682   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:56.301756   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:56.301777   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:01.303828   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:01.303852   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:06.305993   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:06.306180   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:06.317184   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:31:06.317258   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:06.327846   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:31:06.327927   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:06.338274   13060 logs.go:282] 2 containers: [b87a93b50113 205a727ddd11]
	I1007 05:31:06.338344   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:06.349274   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:31:06.349357   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:06.363742   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:31:06.363816   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:06.374396   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:31:06.374473   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:06.388683   13060 logs.go:282] 0 containers: []
	W1007 05:31:06.388694   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:06.388751   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:06.399256   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:31:06.399272   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:31:06.399277   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:06.410810   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:06.410821   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:31:06.430782   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:31:06.430872   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:31:06.446611   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:31:06.446616   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:31:06.461452   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:31:06.461464   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:31:06.476216   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:31:06.476225   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:31:06.488216   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:31:06.488228   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:31:06.500220   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:31:06.500230   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:31:06.511745   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:06.511755   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:06.535297   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:06.535304   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:06.539836   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:06.539845   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:06.576991   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:31:06.577002   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:31:06.592333   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:31:06.592345   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:31:06.604298   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:31:06.604308   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:31:06.622077   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:31:06.622092   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:31:06.622117   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:31:06.622122   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:31:06.622126   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:31:06.622130   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:31:06.622132   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:31:16.626102   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:21.628452   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:21.628678   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:21.653213   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:31:21.653348   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:21.669848   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:31:21.669942   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:21.682990   13060 logs.go:282] 2 containers: [b87a93b50113 205a727ddd11]
	I1007 05:31:21.683075   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:21.694711   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:31:21.694785   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:21.705014   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:31:21.705092   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:21.718216   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:31:21.718294   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:21.728213   13060 logs.go:282] 0 containers: []
	W1007 05:31:21.728225   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:21.728294   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:21.738787   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:31:21.738802   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:21.738808   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:21.774370   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:31:21.774385   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:31:21.789143   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:31:21.789152   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:31:21.803481   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:31:21.803493   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:31:21.815497   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:21.815507   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:21.839832   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:31:21.839839   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:21.853024   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:21.853037   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:31:21.870592   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:31:21.870685   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:31:21.886969   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:21.886977   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:21.891524   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:31:21.891531   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:31:21.903405   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:31:21.903420   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:31:21.915343   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:31:21.915355   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:31:21.933001   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:31:21.933011   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:31:21.946841   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:31:21.946852   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:31:21.965353   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:31:21.965364   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:31:21.965393   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:31:21.965398   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:31:21.965401   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:31:21.965405   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:31:21.965408   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:31:31.969369   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:36.971795   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:36.972305   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:37.011022   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:31:37.011172   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:37.030526   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:31:37.030650   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:37.051419   13060 logs.go:282] 2 containers: [b87a93b50113 205a727ddd11]
	I1007 05:31:37.051502   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:37.063244   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:31:37.063328   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:37.074545   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:31:37.074629   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:37.086086   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:31:37.086159   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:37.097142   13060 logs.go:282] 0 containers: []
	W1007 05:31:37.097154   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:37.097221   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:37.108070   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:31:37.108086   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:37.108091   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:37.112612   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:31:37.112621   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:31:37.127253   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:31:37.127263   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:31:37.139644   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:31:37.139656   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:31:37.152628   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:31:37.152638   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:31:37.171272   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:37.171287   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:31:37.191500   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:31:37.191594   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:31:37.207911   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:37.207918   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:37.251401   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:31:37.251411   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:31:37.266548   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:31:37.266563   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:31:37.282240   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:31:37.282250   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:31:37.297355   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:31:37.297371   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:31:37.313497   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:37.313507   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:37.338230   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:31:37.338239   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:37.349872   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:31:37.349882   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:31:37.349908   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:31:37.349915   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:31:37.349919   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:31:37.349923   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:31:37.349927   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:31:47.352650   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:52.355112   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:52.355385   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:52.373562   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:31:52.373651   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:52.387077   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:31:52.387160   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:52.398172   13060 logs.go:282] 2 containers: [b87a93b50113 205a727ddd11]
	I1007 05:31:52.398244   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:52.408985   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:31:52.409062   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:52.420058   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:31:52.420132   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:52.431041   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:31:52.431115   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:52.441405   13060 logs.go:282] 0 containers: []
	W1007 05:31:52.441417   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:52.441476   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:52.452192   13060 logs.go:282] 1 containers: [9c51a5346c6b]
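The discovery pass above runs one `docker ps -a` query per control-plane component, filtering on the k8s_<component> name prefix that dockershim/cri-dockerd gives Kubernetes-managed containers, and records the matching IDs (the logs.go:282 lines). A rough Go equivalent of this step (listContainers is an illustrative helper, not minikube's API):

	// Sketch of per-component container discovery as seen in the log:
	// run `docker ps -a --filter=name=k8s_<name> --format={{.ID}}`
	// and collect the IDs, one component at a time.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_"+name, "--format={{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet", "storage-provisioner"} {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c) // cf. logs.go:284
				continue
			}
			fmt.Printf("%d containers: %v\n", len(ids), ids)
		}
	}

A component with zero matches, like kindnet here, is reported as missing and skipped, which is expected on clusters that do not use that CNI.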
	I1007 05:31:52.452210   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:31:52.452215   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:31:52.466706   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:31:52.466717   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:31:52.480973   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:31:52.480982   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:31:52.493767   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:31:52.493776   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:31:52.505736   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:31:52.505749   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:31:52.523336   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:52.523346   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:52.528948   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:52.528958   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:52.565539   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:31:52.565555   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:31:52.578373   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:31:52.578389   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:31:52.591016   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:52.591027   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:52.615386   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:31:52.615394   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:52.627632   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:52.627644   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:31:52.646858   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:31:52.646949   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:31:52.662501   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:31:52.662508   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:31:52.678244   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:31:52.678257   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:31:52.678282   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:31:52.678286   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:31:52.678290   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:31:52.678294   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:31:52.678297   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
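The two kubelet problems flagged in every cycle are the same pair of reflector messages from 12:26:09: the kubelet's list/watch of the kube-system/coredns ConfigMap was denied with "no relationship found between node 'running-upgrade-494000' and this object". That wording comes from the Kubernetes node authorizer, which only lets a kubelet read a ConfigMap once a pod referencing it is bound to that node, so a denial like this can appear transiently while pods are being re-established after the upgrade. The detection itself (the logs.go:138 lines) is a scan of journalctl output for warning/error records; a small Go sketch of such a scan (the regular expression here is an assumption for illustration, not minikube's actual matcher):

	// Sketch of scanning kubelet journal output for problem lines:
	// flag klog warning/error records such as
	// "kubelet[3963]: E1007 12:26:09.786608 ... is forbidden ...".
	package main

	import (
		"bufio"
		"fmt"
		"os/exec"
		"regexp"
	)

	func main() {
		cmd := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400")
		out, err := cmd.StdoutPipe()
		if err != nil {
			panic(err)
		}
		if err := cmd.Start(); err != nil {
			panic(err)
		}
		// Illustrative pattern: a kubelet klog W/E record mentioning a failure keyword.
		problem := regexp.MustCompile(`kubelet\[\d+\]: [WE]\d{4} .*(failed|forbidden|error)`)
		sc := bufio.NewScanner(out)
		for sc.Scan() {
			if line := sc.Text(); problem.MatchString(line) {
				fmt.Println("Found kubelet problem:", line)
			}
		}
		cmd.Wait()
	}

Since the flagged journal entries are from 12:26:09 and never change across cycles, the summary block repeats identically each time the loop runs.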
	I1007 05:32:02.680627   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:07.682883   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:07.683008   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:07.696173   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:32:07.696260   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:07.707712   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:32:07.707793   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:07.721567   13060 logs.go:282] 2 containers: [b87a93b50113 205a727ddd11]
	I1007 05:32:07.721645   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:07.732560   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:32:07.732624   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:07.743533   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:32:07.743608   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:07.754461   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:32:07.754543   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:07.764686   13060 logs.go:282] 0 containers: []
	W1007 05:32:07.764701   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:07.764760   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:07.775609   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:32:07.775622   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:32:07.775627   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:07.787427   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:07.787436   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:07.792386   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:07.792392   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:07.829199   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:32:07.829214   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:32:07.843917   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:32:07.843930   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:32:07.857922   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:32:07.857931   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:32:07.870149   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:32:07.870160   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:32:07.882924   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:07.882935   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:07.908667   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:07.908679   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:32:07.928570   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:32:07.928666   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:32:07.944662   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:32:07.944672   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:32:07.957117   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:32:07.957129   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:32:07.975255   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:32:07.975265   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:32:07.993597   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:32:07.993607   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:32:08.006583   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:32:08.006597   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:32:08.006626   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:32:08.006631   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:32:08.006635   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:32:08.006640   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:32:08.006643   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:32:18.010613   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:23.012791   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:23.013045   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:23.035924   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:32:23.036056   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:23.052207   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:32:23.052303   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:23.065083   13060 logs.go:282] 4 containers: [d57fd0f376b8 33e114d3b8f3 b87a93b50113 205a727ddd11]
	I1007 05:32:23.065166   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:23.075958   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:32:23.076036   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:23.088554   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:32:23.088639   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:23.099467   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:32:23.099548   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:23.109797   13060 logs.go:282] 0 containers: []
	W1007 05:32:23.109809   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:23.109878   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:23.120521   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:32:23.120541   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:23.120547   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:32:23.138042   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:32:23.138134   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:32:23.153825   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:32:23.153832   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:32:23.165399   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:23.165410   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:23.189828   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:32:23.189836   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:32:23.206766   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:32:23.206775   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:32:23.220571   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:32:23.220587   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:32:23.237946   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:32:23.237956   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:32:23.249692   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:32:23.249702   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:23.261805   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:23.261814   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:23.266451   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:23.266457   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:23.302577   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:32:23.302590   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:32:23.317552   13060 logs.go:123] Gathering logs for coredns [d57fd0f376b8] ...
	I1007 05:32:23.317563   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57fd0f376b8"
	I1007 05:32:23.333110   13060 logs.go:123] Gathering logs for coredns [33e114d3b8f3] ...
	I1007 05:32:23.333120   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e114d3b8f3"
	I1007 05:32:23.344229   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:32:23.344240   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:32:23.355601   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:32:23.355612   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:32:23.371350   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:32:23.371365   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:32:23.371392   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:32:23.371398   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:32:23.371403   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:32:23.371406   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:32:23.371410   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:32:33.375427   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:38.376250   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:38.376470   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:38.393201   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:32:38.393299   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:38.405823   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:32:38.405905   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:38.417396   13060 logs.go:282] 4 containers: [d57fd0f376b8 33e114d3b8f3 b87a93b50113 205a727ddd11]
	I1007 05:32:38.417471   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:38.427949   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:32:38.428025   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:38.438827   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:32:38.438899   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:38.449412   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:32:38.449473   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:38.460001   13060 logs.go:282] 0 containers: []
	W1007 05:32:38.460014   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:38.460079   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:38.470501   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:32:38.470515   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:38.470520   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:38.506443   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:32:38.506453   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:32:38.521041   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:32:38.521051   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:32:38.534865   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:32:38.534874   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:32:38.550056   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:38.550064   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:38.573858   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:38.573868   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:38.578517   13060 logs.go:123] Gathering logs for coredns [33e114d3b8f3] ...
	I1007 05:32:38.578526   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e114d3b8f3"
	I1007 05:32:38.589977   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:32:38.589987   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:32:38.601581   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:32:38.601594   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:38.613221   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:38.613233   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:32:38.631898   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:32:38.631992   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:32:38.648376   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:32:38.648384   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:32:38.660070   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:32:38.660082   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:32:38.671585   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:32:38.671597   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:32:38.689510   13060 logs.go:123] Gathering logs for coredns [d57fd0f376b8] ...
	I1007 05:32:38.689521   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57fd0f376b8"
	I1007 05:32:38.701105   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:32:38.701116   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:32:38.716921   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:32:38.716931   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:32:38.716956   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:32:38.716960   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:32:38.716973   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:32:38.716977   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:32:38.716979   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:32:48.719590   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:53.721867   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:53.721953   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:53.733108   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:32:53.733188   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:53.745114   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:32:53.745203   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:53.757468   13060 logs.go:282] 4 containers: [d57fd0f376b8 33e114d3b8f3 b87a93b50113 205a727ddd11]
	I1007 05:32:53.757553   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:53.773012   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:32:53.773097   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:53.786808   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:32:53.786891   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:53.799065   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:32:53.799148   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:53.810474   13060 logs.go:282] 0 containers: []
	W1007 05:32:53.810486   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:53.810555   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:53.821799   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:32:53.821845   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:32:53.821852   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:32:53.837848   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:32:53.837863   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:32:53.853636   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:32:53.853651   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:32:53.866270   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:32:53.866283   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:32:53.882359   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:53.882374   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:32:53.903056   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:32:53.903150   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:32:53.919118   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:53.919125   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:53.923558   13060 logs.go:123] Gathering logs for coredns [d57fd0f376b8] ...
	I1007 05:32:53.923567   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57fd0f376b8"
	I1007 05:32:53.938049   13060 logs.go:123] Gathering logs for coredns [33e114d3b8f3] ...
	I1007 05:32:53.938060   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e114d3b8f3"
	I1007 05:32:53.968383   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:32:53.968393   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:32:53.986778   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:53.986792   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:54.011835   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:32:54.011845   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:54.023738   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:54.023749   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:54.058101   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:32:54.058116   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:32:54.069704   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:32:54.069715   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:32:54.094156   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:32:54.094172   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:32:54.106056   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:32:54.106071   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:32:54.106102   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:32:54.106108   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:32:54.106112   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:32:54.106128   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:32:54.106133   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:33:04.110127   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:09.110886   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:09.111052   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:33:09.124454   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:33:09.124546   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:33:09.138785   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:33:09.138869   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:33:09.149207   13060 logs.go:282] 4 containers: [d57fd0f376b8 33e114d3b8f3 b87a93b50113 205a727ddd11]
	I1007 05:33:09.149294   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:33:09.160502   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:33:09.160579   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:33:09.170994   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:33:09.171075   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:33:09.181594   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:33:09.181671   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:33:09.191808   13060 logs.go:282] 0 containers: []
	W1007 05:33:09.191819   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:33:09.191881   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:33:09.202695   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:33:09.202714   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:33:09.202719   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:33:09.217059   13060 logs.go:123] Gathering logs for coredns [33e114d3b8f3] ...
	I1007 05:33:09.217069   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e114d3b8f3"
	I1007 05:33:09.228706   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:33:09.228716   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:33:09.240656   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:33:09.240669   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:33:09.259648   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:33:09.259657   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:33:09.271836   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:33:09.271848   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:33:09.276815   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:33:09.276822   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:33:09.313309   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:33:09.313319   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:33:09.324792   13060 logs.go:123] Gathering logs for coredns [d57fd0f376b8] ...
	I1007 05:33:09.324807   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57fd0f376b8"
	I1007 05:33:09.337789   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:33:09.337799   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:33:09.353511   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:33:09.353524   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:33:09.366113   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:33:09.366127   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:33:09.378946   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:33:09.378960   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:33:09.407657   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:33:09.407675   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:33:09.428351   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:33:09.428449   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:33:09.444808   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:33:09.444826   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:33:09.459079   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:33:09.459088   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:33:09.459116   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:33:09.459120   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:33:09.459123   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:33:09.459126   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:33:09.459130   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:33:19.461116   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:24.463328   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:24.463493   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:33:24.475137   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:33:24.475226   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:33:24.486262   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:33:24.486331   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:33:24.500605   13060 logs.go:282] 4 containers: [d57fd0f376b8 33e114d3b8f3 b87a93b50113 205a727ddd11]
	I1007 05:33:24.500691   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:33:24.521017   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:33:24.521097   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:33:24.531274   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:33:24.531344   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:33:24.541587   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:33:24.541665   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:33:24.551795   13060 logs.go:282] 0 containers: []
	W1007 05:33:24.551807   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:33:24.551876   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:33:24.562420   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:33:24.562436   13060 logs.go:123] Gathering logs for coredns [d57fd0f376b8] ...
	I1007 05:33:24.562441   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57fd0f376b8"
	I1007 05:33:24.573977   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:33:24.573989   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:33:24.586009   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:33:24.586021   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:33:24.597514   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:33:24.597524   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:33:24.609693   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:33:24.609703   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:33:24.645362   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:33:24.645376   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:33:24.659593   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:33:24.659602   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:33:24.683929   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:33:24.683939   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:33:24.688551   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:33:24.688560   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:33:24.702803   13060 logs.go:123] Gathering logs for coredns [33e114d3b8f3] ...
	I1007 05:33:24.702815   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e114d3b8f3"
	I1007 05:33:24.718682   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:33:24.718692   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:33:24.730126   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:33:24.730134   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:33:24.748059   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:33:24.748152   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:33:24.763845   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:33:24.763851   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:33:24.779841   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:33:24.779852   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:33:24.795234   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:33:24.795246   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:33:24.813363   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:33:24.813373   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:33:24.813403   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:33:24.813407   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:33:24.813411   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:33:24.813414   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:33:24.813418   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:33:34.817317   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:39.818099   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:39.818313   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:33:39.838681   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:33:39.838791   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:33:39.853124   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:33:39.853213   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:33:39.865888   13060 logs.go:282] 4 containers: [d57fd0f376b8 33e114d3b8f3 b87a93b50113 205a727ddd11]
	I1007 05:33:39.865978   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:33:39.879867   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:33:39.879948   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:33:39.890603   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:33:39.890702   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:33:39.904140   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:33:39.904206   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:33:39.916172   13060 logs.go:282] 0 containers: []
	W1007 05:33:39.916185   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:33:39.916252   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:33:39.927106   13060 logs.go:282] 1 containers: [9c51a5346c6b]
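Each gathering round begins by enumerating one container per control-plane component through Docker name filters; the k8s_ name prefix is the dockershim/cri-dockerd container-naming convention. A standalone sketch of that enumeration step (illustrative only; it assumes a reachable Docker daemon):

    // list_containers.go: for each control-plane component, ask Docker for
    // matching container IDs, mirroring the filter commands in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter=name=k8s_"+c, "--format={{.ID}}").Output()
            if err != nil {
                fmt.Printf("%s: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
        }
    }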
	I1007 05:33:39.927126   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:33:39.927132   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:33:39.963543   13060 logs.go:123] Gathering logs for coredns [d57fd0f376b8] ...
	I1007 05:33:39.963554   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57fd0f376b8"
	I1007 05:33:39.975698   13060 logs.go:123] Gathering logs for coredns [33e114d3b8f3] ...
	I1007 05:33:39.975708   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e114d3b8f3"
	I1007 05:33:39.991335   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:33:39.991346   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:33:40.011063   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:33:40.011157   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
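The "Found kubelet problem" entries come from scanning the last 400 kubelet journal lines for klog warning/error records. A simplified sketch of that scan; minikube's actual matcher in logs.go is more selective about which lines count as problems:

    // kubelet_problems.go: pull recent kubelet journal lines and surface
    // klog warning/error records (lines beginning W<mmdd> or E<mmdd>).
    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "os/exec"
        "regexp"
    )

    func main() {
        out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
        if err != nil {
            panic(err)
        }
        problem := regexp.MustCompile(`kubelet\[\d+\]: [WE]\d{4}`)
        sc := bufio.NewScanner(bytes.NewReader(out))
        for sc.Scan() {
            if problem.MatchString(sc.Text()) {
                fmt.Println("Found kubelet problem:", sc.Text())
            }
        }
    }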
	I1007 05:33:40.027114   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:33:40.027121   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:33:40.042601   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:33:40.042611   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:33:40.064365   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:33:40.064376   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:33:40.068944   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:33:40.068951   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:33:40.080876   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:33:40.080890   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:33:40.106047   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:33:40.106056   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
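The backquoted expression in that command keeps the container-status step tolerant of missing tools: use crictl when it is on PATH, otherwise fall back to docker ps -a. The same fallback expressed in Go, as an illustrative sketch (sudo omitted for brevity):

    // container_status.go: prefer crictl when available, else fall back to docker.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        tool := "docker"
        if path, err := exec.LookPath("crictl"); err == nil {
            tool = path
        }
        out, err := exec.Command(tool, "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Printf("%s failed: %v\n", tool, err)
        }
        fmt.Print(string(out))
    }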
	I1007 05:33:40.117390   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:33:40.117401   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:33:40.136304   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:33:40.136319   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:33:40.148063   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:33:40.148076   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:33:40.159659   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:33:40.159672   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:33:40.179538   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:33:40.179553   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:33:40.193473   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:33:40.193486   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:33:40.193509   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:33:40.193514   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:33:40.193517   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:33:40.193520   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:33:40.193522   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:33:50.197480   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:55.199758   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:55.199948   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:33:55.210721   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:33:55.210801   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:33:55.221752   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:33:55.221834   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:33:55.238546   13060 logs.go:282] 4 containers: [d57fd0f376b8 33e114d3b8f3 b87a93b50113 205a727ddd11]
	I1007 05:33:55.238637   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:33:55.250942   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:33:55.251019   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:33:55.261831   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:33:55.261913   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:33:55.272893   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:33:55.272972   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:33:55.283621   13060 logs.go:282] 0 containers: []
	W1007 05:33:55.283632   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:33:55.283693   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:33:55.298262   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:33:55.298278   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:33:55.298283   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:33:55.302935   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:33:55.302941   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:33:55.352629   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:33:55.352643   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:33:55.367518   13060 logs.go:123] Gathering logs for coredns [33e114d3b8f3] ...
	I1007 05:33:55.367528   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e114d3b8f3"
	I1007 05:33:55.379438   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:33:55.379448   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:33:55.392536   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:33:55.392545   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:33:55.406711   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:33:55.406723   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:33:55.419416   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:33:55.419427   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:33:55.435066   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:33:55.435077   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:33:55.454031   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:33:55.454122   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:33:55.470369   13060 logs.go:123] Gathering logs for coredns [d57fd0f376b8] ...
	I1007 05:33:55.470377   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57fd0f376b8"
	I1007 05:33:55.481952   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:33:55.481963   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:33:55.496963   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:33:55.496977   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:33:55.508852   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:33:55.508863   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:33:55.520407   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:33:55.520420   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:33:55.537744   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:33:55.537754   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:33:55.560180   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:33:55.560188   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:33:55.560215   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:33:55.560219   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:33:55.560222   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:33:55.560229   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:33:55.560232   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:34:05.564044   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:34:10.566279   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:34:10.571796   13060 out.go:201] 
	W1007 05:34:10.575860   13060 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1007 05:34:10.575878   13060 out.go:270] * 
	W1007 05:34:10.576784   13060 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:34:10.585802   13060 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-494000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-10-07 05:34:10.668176 -0700 PDT m=+1327.965767001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-494000 -n running-upgrade-494000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-494000 -n running-upgrade-494000: exit status 2 (15.746942333s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-494000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-772000          | force-systemd-flag-772000 | jenkins | v1.34.0 | 07 Oct 24 05:23 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-838000              | force-systemd-env-838000  | jenkins | v1.34.0 | 07 Oct 24 05:24 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-838000           | force-systemd-env-838000  | jenkins | v1.34.0 | 07 Oct 24 05:24 PDT | 07 Oct 24 05:24 PDT |
	| start   | -p docker-flags-871000                | docker-flags-871000       | jenkins | v1.34.0 | 07 Oct 24 05:24 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-772000             | force-systemd-flag-772000 | jenkins | v1.34.0 | 07 Oct 24 05:24 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-772000          | force-systemd-flag-772000 | jenkins | v1.34.0 | 07 Oct 24 05:24 PDT | 07 Oct 24 05:24 PDT |
	| start   | -p cert-expiration-719000             | cert-expiration-719000    | jenkins | v1.34.0 | 07 Oct 24 05:24 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-871000 ssh               | docker-flags-871000       | jenkins | v1.34.0 | 07 Oct 24 05:24 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-871000 ssh               | docker-flags-871000       | jenkins | v1.34.0 | 07 Oct 24 05:24 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-871000                | docker-flags-871000       | jenkins | v1.34.0 | 07 Oct 24 05:24 PDT | 07 Oct 24 05:24 PDT |
	| start   | -p cert-options-516000                | cert-options-516000       | jenkins | v1.34.0 | 07 Oct 24 05:24 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-516000 ssh               | cert-options-516000       | jenkins | v1.34.0 | 07 Oct 24 05:24 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-516000 -- sudo        | cert-options-516000       | jenkins | v1.34.0 | 07 Oct 24 05:24 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-516000                | cert-options-516000       | jenkins | v1.34.0 | 07 Oct 24 05:24 PDT | 07 Oct 24 05:24 PDT |
	| start   | -p running-upgrade-494000             | minikube                  | jenkins | v1.26.0 | 07 Oct 24 05:24 PDT | 07 Oct 24 05:25 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-494000             | running-upgrade-494000    | jenkins | v1.34.0 | 07 Oct 24 05:25 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-719000             | cert-expiration-719000    | jenkins | v1.34.0 | 07 Oct 24 05:27 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-719000             | cert-expiration-719000    | jenkins | v1.34.0 | 07 Oct 24 05:27 PDT | 07 Oct 24 05:27 PDT |
	| start   | -p kubernetes-upgrade-881000          | kubernetes-upgrade-881000 | jenkins | v1.34.0 | 07 Oct 24 05:27 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-881000          | kubernetes-upgrade-881000 | jenkins | v1.34.0 | 07 Oct 24 05:27 PDT | 07 Oct 24 05:27 PDT |
	| start   | -p kubernetes-upgrade-881000          | kubernetes-upgrade-881000 | jenkins | v1.34.0 | 07 Oct 24 05:27 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-881000          | kubernetes-upgrade-881000 | jenkins | v1.34.0 | 07 Oct 24 05:27 PDT | 07 Oct 24 05:27 PDT |
	| start   | -p stopped-upgrade-431000             | minikube                  | jenkins | v1.26.0 | 07 Oct 24 05:27 PDT | 07 Oct 24 05:28 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-431000 stop           | minikube                  | jenkins | v1.26.0 | 07 Oct 24 05:28 PDT | 07 Oct 24 05:28 PDT |
	| start   | -p stopped-upgrade-431000             | stopped-upgrade-431000    | jenkins | v1.34.0 | 07 Oct 24 05:28 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 05:28:35
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 05:28:35.590956   13189 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:28:35.591134   13189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:28:35.591138   13189 out.go:358] Setting ErrFile to fd 2...
	I1007 05:28:35.591142   13189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:28:35.591314   13189 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:28:35.592620   13189 out.go:352] Setting JSON to false
	I1007 05:28:35.613616   13189 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7086,"bootTime":1728297029,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:28:35.613680   13189 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:28:35.618627   13189 out.go:177] * [stopped-upgrade-431000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:28:35.626579   13189 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:28:35.626630   13189 notify.go:220] Checking for updates...
	I1007 05:28:35.633589   13189 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:28:35.636537   13189 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:28:35.639564   13189 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:28:35.642585   13189 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:28:35.645493   13189 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:28:35.648925   13189 config.go:182] Loaded profile config "stopped-upgrade-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:28:35.652544   13189 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1007 05:28:35.655525   13189 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:28:35.659516   13189 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:28:35.666546   13189 start.go:297] selected driver: qemu2
	I1007 05:28:35.666552   13189 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52462 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 05:28:35.666616   13189 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:28:35.669246   13189 cni.go:84] Creating CNI manager for ""
	I1007 05:28:35.669288   13189 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:28:35.669313   13189 start.go:340] cluster config:
	{Name:stopped-upgrade-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52462 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 05:28:35.669366   13189 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:28:35.677395   13189 out.go:177] * Starting "stopped-upgrade-431000" primary control-plane node in "stopped-upgrade-431000" cluster
	I1007 05:28:35.681526   13189 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1007 05:28:35.681542   13189 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1007 05:28:35.681551   13189 cache.go:56] Caching tarball of preloaded images
	I1007 05:28:35.681638   13189 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:28:35.681643   13189 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1007 05:28:35.681706   13189 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/config.json ...
	I1007 05:28:35.682159   13189 start.go:360] acquireMachinesLock for stopped-upgrade-431000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:28:35.682207   13189 start.go:364] duration metric: took 42.792µs to acquireMachinesLock for "stopped-upgrade-431000"
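The acquireMachinesLock lines describe a retry-style lock (Delay:500ms, Timeout:13m0s) and report how long acquisition took. Below is a generic sketch of that acquire-with-retry-and-timeout shape; minikube's real machines lock is an OS-level named mutex shared across processes, so this channel-based version only illustrates the pattern:

    // timed_lock.go: a try-lock that retries every delay until a timeout.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    type TryLock struct{ ch chan struct{} }

    func NewTryLock() *TryLock { return &TryLock{ch: make(chan struct{}, 1)} }

    func (l *TryLock) Acquire(delay, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            select {
            case l.ch <- struct{}{}: // non-blocking try
                return nil
            default:
            }
            if time.Now().After(deadline) {
                return errors.New("timed out acquiring lock")
            }
            time.Sleep(delay)
        }
    }

    func (l *TryLock) Release() { <-l.ch }

    func main() {
        l := NewTryLock()
        start := time.Now()
        if err := l.Acquire(500*time.Millisecond, 13*time.Minute); err != nil {
            panic(err)
        }
        defer l.Release()
        fmt.Printf("duration metric: took %s to acquire lock\n", time.Since(start))
    }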
	I1007 05:28:35.682215   13189 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:28:35.682220   13189 fix.go:54] fixHost starting: 
	I1007 05:28:35.682336   13189 fix.go:112] recreateIfNeeded on stopped-upgrade-431000: state=Stopped err=<nil>
	W1007 05:28:35.682345   13189 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:28:35.686407   13189 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-431000" ...
	I1007 05:28:35.604601   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:28:35.694558   13189 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:28:35.694628   13189 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52428-:22,hostfwd=tcp::52429-:2376,hostname=stopped-upgrade-431000 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/disk.qcow2
	I1007 05:28:35.744464   13189 main.go:141] libmachine: STDOUT: 
	I1007 05:28:35.744490   13189 main.go:141] libmachine: STDERR: 
	I1007 05:28:35.744512   13189 main.go:141] libmachine: Waiting for VM to start (ssh -p 52428 docker@127.0.0.1)...
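"Waiting for VM to start" polls the guest's SSH endpoint, which the qemu-system-aarch64 invocation above forwards to the host via hostfwd=tcp::52428-:22. Readiness can be probed by dialing the forwarded port until a TCP connection succeeds, e.g. this sketch:

    // wait_ssh.go: dial the forwarded guest SSH port until it accepts.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        addr := "127.0.0.1:52428" // forwarded port from the -nic hostfwd flag above
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("SSH port is accepting connections")
                return
            }
            time.Sleep(time.Second)
        }
    }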
	I1007 05:28:40.606789   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:28:40.607376   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:28:40.651454   13060 logs.go:282] 2 containers: [eba0cb217bc2 8269cd6065ba]
	I1007 05:28:40.651599   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:28:40.675624   13060 logs.go:282] 2 containers: [dcb43861e68c 89902f34c603]
	I1007 05:28:40.675725   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:28:40.689755   13060 logs.go:282] 1 containers: [1a6f70326ce2]
	I1007 05:28:40.689839   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:28:40.701866   13060 logs.go:282] 2 containers: [a7744cef9eab 349aeab911b4]
	I1007 05:28:40.701946   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:28:40.712451   13060 logs.go:282] 1 containers: [aba6d1211c82]
	I1007 05:28:40.712528   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:28:40.723095   13060 logs.go:282] 2 containers: [a035f9be42f8 cff18c2b3cd6]
	I1007 05:28:40.723170   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:28:40.737748   13060 logs.go:282] 0 containers: []
	W1007 05:28:40.737767   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:28:40.737835   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:28:40.748564   13060 logs.go:282] 2 containers: [574be28e75fb 94aef91365df]
	I1007 05:28:40.748587   13060 logs.go:123] Gathering logs for kube-controller-manager [cff18c2b3cd6] ...
	I1007 05:28:40.748592   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cff18c2b3cd6"
	I1007 05:28:40.760484   13060 logs.go:123] Gathering logs for kube-apiserver [eba0cb217bc2] ...
	I1007 05:28:40.760494   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eba0cb217bc2"
	I1007 05:28:40.774714   13060 logs.go:123] Gathering logs for etcd [dcb43861e68c] ...
	I1007 05:28:40.774724   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcb43861e68c"
	I1007 05:28:40.788729   13060 logs.go:123] Gathering logs for coredns [1a6f70326ce2] ...
	I1007 05:28:40.788738   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6f70326ce2"
	I1007 05:28:40.800583   13060 logs.go:123] Gathering logs for kube-scheduler [a7744cef9eab] ...
	I1007 05:28:40.800597   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7744cef9eab"
	I1007 05:28:40.813821   13060 logs.go:123] Gathering logs for kube-scheduler [349aeab911b4] ...
	I1007 05:28:40.813830   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349aeab911b4"
	I1007 05:28:40.826689   13060 logs.go:123] Gathering logs for kube-controller-manager [a035f9be42f8] ...
	I1007 05:28:40.826705   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a035f9be42f8"
	I1007 05:28:40.844197   13060 logs.go:123] Gathering logs for storage-provisioner [574be28e75fb] ...
	I1007 05:28:40.844208   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574be28e75fb"
	I1007 05:28:40.855947   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:28:40.855957   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:28:40.867873   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:28:40.867884   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:28:40.905717   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:28:40.905813   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:28:40.906157   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:28:40.906162   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:28:40.910208   13060 logs.go:123] Gathering logs for etcd [89902f34c603] ...
	I1007 05:28:40.910215   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89902f34c603"
	I1007 05:28:40.927822   13060 logs.go:123] Gathering logs for storage-provisioner [94aef91365df] ...
	I1007 05:28:40.927834   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94aef91365df"
	I1007 05:28:40.938831   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:28:40.938841   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:28:40.961554   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:28:40.961567   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:28:41.005483   13060 logs.go:123] Gathering logs for kube-apiserver [8269cd6065ba] ...
	I1007 05:28:41.005494   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8269cd6065ba"
	I1007 05:28:41.024553   13060 logs.go:123] Gathering logs for kube-proxy [aba6d1211c82] ...
	I1007 05:28:41.024565   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aba6d1211c82"
	I1007 05:28:41.036712   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:28:41.036723   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:28:41.036748   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:28:41.036755   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:28:41.036758   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:28:41.036761   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:28:41.036766   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:28:51.040761   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:28:56.043015   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:28:56.043123   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:28:56.065650   13060 logs.go:282] 2 containers: [eba0cb217bc2 8269cd6065ba]
	I1007 05:28:56.065731   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:28:56.080882   13060 logs.go:282] 2 containers: [dcb43861e68c 89902f34c603]
	I1007 05:28:56.080961   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:28:56.094286   13060 logs.go:282] 1 containers: [1a6f70326ce2]
	I1007 05:28:56.094370   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:28:56.105916   13060 logs.go:282] 2 containers: [a7744cef9eab 349aeab911b4]
	I1007 05:28:56.105999   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:28:56.116638   13060 logs.go:282] 1 containers: [aba6d1211c82]
	I1007 05:28:56.116719   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:28:56.127994   13060 logs.go:282] 2 containers: [a035f9be42f8 cff18c2b3cd6]
	I1007 05:28:56.128093   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:28:56.138872   13060 logs.go:282] 0 containers: []
	W1007 05:28:56.138886   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:28:56.138951   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:28:56.149892   13060 logs.go:282] 2 containers: [574be28e75fb 94aef91365df]
	I1007 05:28:56.149910   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:28:56.149916   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:28:56.154081   13060 logs.go:123] Gathering logs for kube-apiserver [eba0cb217bc2] ...
	I1007 05:28:56.154088   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eba0cb217bc2"
	I1007 05:28:56.172806   13060 logs.go:123] Gathering logs for etcd [dcb43861e68c] ...
	I1007 05:28:56.172819   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcb43861e68c"
	I1007 05:28:56.187340   13060 logs.go:123] Gathering logs for kube-controller-manager [cff18c2b3cd6] ...
	I1007 05:28:56.187350   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cff18c2b3cd6"
	I1007 05:28:56.199652   13060 logs.go:123] Gathering logs for storage-provisioner [94aef91365df] ...
	I1007 05:28:56.199662   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94aef91365df"
	I1007 05:28:56.211639   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:28:56.211651   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:28:56.252904   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:28:56.252999   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:28:56.253347   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:28:56.253353   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:28:56.289746   13060 logs.go:123] Gathering logs for etcd [89902f34c603] ...
	I1007 05:28:56.289758   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89902f34c603"
	I1007 05:28:56.308661   13060 logs.go:123] Gathering logs for kube-proxy [aba6d1211c82] ...
	I1007 05:28:56.308674   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aba6d1211c82"
	I1007 05:28:56.323422   13060 logs.go:123] Gathering logs for kube-apiserver [8269cd6065ba] ...
	I1007 05:28:56.323434   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8269cd6065ba"
	I1007 05:28:56.343054   13060 logs.go:123] Gathering logs for coredns [1a6f70326ce2] ...
	I1007 05:28:56.343065   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6f70326ce2"
	I1007 05:28:56.355098   13060 logs.go:123] Gathering logs for kube-scheduler [349aeab911b4] ...
	I1007 05:28:56.355110   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349aeab911b4"
	I1007 05:28:56.367776   13060 logs.go:123] Gathering logs for kube-controller-manager [a035f9be42f8] ...
	I1007 05:28:56.367786   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a035f9be42f8"
	I1007 05:28:56.402508   13060 logs.go:123] Gathering logs for kube-scheduler [a7744cef9eab] ...
	I1007 05:28:56.402517   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7744cef9eab"
	I1007 05:28:56.418362   13060 logs.go:123] Gathering logs for storage-provisioner [574be28e75fb] ...
	I1007 05:28:56.418373   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574be28e75fb"
	I1007 05:28:56.433679   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:28:56.433689   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:28:56.457843   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:28:56.457854   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:28:56.470174   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:28:56.470185   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:28:56.470211   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:28:56.470216   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:28:56.470220   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:28:56.470223   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:28:56.470226   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
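The kubelet problem flagged above is a Node-authorizer symptom that commonly shows up mid-upgrade: the kubelet authenticates as system:node:running-upgrade-494000, and the Node authorizer only grants that identity access to ConfigMaps referenced by pods already bound to the node, hence "no relationship found between node ... and this object". A minimal client-go sketch (the kubeconfig path is an assumption; the node's kubelet normally uses /etc/kubernetes/kubelet.conf) that issues the same scoped List the reflector performs:

```go
// Sketch only, not minikube code: reproduce the reflector's scoped List
// of the "coredns" ConfigMap with the identity the kubeconfig carries.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_, err = cs.CoreV1().ConfigMaps("kube-system").List(context.Background(),
		metav1.ListOptions{FieldSelector: "metadata.name=coredns"})
	fmt.Println(err) // expect the same "forbidden ... no relationship found" error
}
```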
	I1007 05:28:55.939687   13189 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/config.json ...
	I1007 05:28:55.940576   13189 machine.go:93] provisionDockerMachine start ...
	I1007 05:28:55.940842   13189 main.go:141] libmachine: Using SSH client type: native
	I1007 05:28:55.941371   13189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032de1f0] 0x1032e0a30 <nil>  [] 0s} localhost 52428 <nil> <nil>}
	I1007 05:28:55.941389   13189 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 05:28:56.024608   13189 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 05:28:56.024637   13189 buildroot.go:166] provisioning hostname "stopped-upgrade-431000"
	I1007 05:28:56.024752   13189 main.go:141] libmachine: Using SSH client type: native
	I1007 05:28:56.024936   13189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032de1f0] 0x1032e0a30 <nil>  [] 0s} localhost 52428 <nil> <nil>}
	I1007 05:28:56.024946   13189 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-431000 && echo "stopped-upgrade-431000" | sudo tee /etc/hostname
	I1007 05:28:56.095230   13189 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-431000
	
	I1007 05:28:56.095288   13189 main.go:141] libmachine: Using SSH client type: native
	I1007 05:28:56.095402   13189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032de1f0] 0x1032e0a30 <nil>  [] 0s} localhost 52428 <nil> <nil>}
	I1007 05:28:56.095411   13189 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 05:28:56.158538   13189 main.go:141] libmachine: SSH cmd err, output: <nil>: 
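provisionDockerMachine drives the guest entirely over SSH, one session per step: read the hostname, tee the new name into /etc/hostname, then patch /etc/hosts as above. A minimal sketch of that pattern with golang.org/x/crypto/ssh, reusing the key path, port, and user from the sshutil.go:53 lines in this log (everything else is illustrative):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "localhost:52428", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; never do this in production
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Each provisioning step is a fresh session, mirroring ssh_runner.go:195.
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("SSH cmd err, output: %v: %s", err, out)
}
```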
	I1007 05:28:56.158553   13189 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18424-10771/.minikube CaCertPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18424-10771/.minikube}
	I1007 05:28:56.158573   13189 buildroot.go:174] setting up certificates
	I1007 05:28:56.158577   13189 provision.go:84] configureAuth start
	I1007 05:28:56.158581   13189 provision.go:143] copyHostCerts
	I1007 05:28:56.158655   13189 exec_runner.go:144] found /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.pem, removing ...
	I1007 05:28:56.158664   13189 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.pem
	I1007 05:28:56.158767   13189 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.pem (1082 bytes)
	I1007 05:28:56.158992   13189 exec_runner.go:144] found /Users/jenkins/minikube-integration/18424-10771/.minikube/cert.pem, removing ...
	I1007 05:28:56.158996   13189 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18424-10771/.minikube/cert.pem
	I1007 05:28:56.159040   13189 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18424-10771/.minikube/cert.pem (1123 bytes)
	I1007 05:28:56.159155   13189 exec_runner.go:144] found /Users/jenkins/minikube-integration/18424-10771/.minikube/key.pem, removing ...
	I1007 05:28:56.159160   13189 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18424-10771/.minikube/key.pem
	I1007 05:28:56.159200   13189 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18424-10771/.minikube/key.pem (1675 bytes)
	I1007 05:28:56.159292   13189 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-431000 san=[127.0.0.1 localhost minikube stopped-upgrade-431000]
	I1007 05:28:56.395392   13189 provision.go:177] copyRemoteCerts
	I1007 05:28:56.395457   13189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 05:28:56.395470   13189 sshutil.go:53] new ssh client: &{IP:localhost Port:52428 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/id_rsa Username:docker}
	I1007 05:28:56.429902   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 05:28:56.437938   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 05:28:56.446100   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1007 05:28:56.453845   13189 provision.go:87] duration metric: took 295.255959ms to configureAuth
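configureAuth regenerates a server certificate whose SANs must cover every name the TLS-secured Docker endpoint is reached by; the provision.go:117 line above shows the SAN set [127.0.0.1 localhost minikube stopped-upgrade-431000]. A minimal crypto/x509 sketch of signing such a cert with an existing CA, assuming local copies of the CA pair and PKCS#1 RSA keys (consistent with the .pem sizes in the log, but still an assumption):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	caBlock, _ := pem.Decode(must(os.ReadFile("ca.pem")))      // assumed local copies
	keyBlock, _ := pem.Decode(must(os.ReadFile("ca-key.pem"))) // of the CA pair
	if caBlock == nil || keyBlock == nil {
		panic("bad PEM input")
	}
	ca := must(x509.ParseCertificate(caBlock.Bytes))
	caKey := must(x509.ParsePKCS1PrivateKey(keyBlock.Bytes))

	serverKey := must(rsa.GenerateKey(rand.Reader, 2048))
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-431000"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN set from the provision.go:117 line above.
		DNSNames:    []string{"localhost", "minikube", "stopped-upgrade-431000"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1")},
	}
	der := must(x509.CreateCertificate(rand.Reader, tmpl, ca, &serverKey.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```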
	I1007 05:28:56.453858   13189 buildroot.go:189] setting minikube options for container-runtime
	I1007 05:28:56.454004   13189 config.go:182] Loaded profile config "stopped-upgrade-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:28:56.454076   13189 main.go:141] libmachine: Using SSH client type: native
	I1007 05:28:56.454171   13189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032de1f0] 0x1032e0a30 <nil>  [] 0s} localhost 52428 <nil> <nil>}
	I1007 05:28:56.454177   13189 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1007 05:28:56.519277   13189 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1007 05:28:56.519288   13189 buildroot.go:70] root file system type: tmpfs
	I1007 05:28:56.519345   13189 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1007 05:28:56.519416   13189 main.go:141] libmachine: Using SSH client type: native
	I1007 05:28:56.519530   13189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032de1f0] 0x1032e0a30 <nil>  [] 0s} localhost 52428 <nil> <nil>}
	I1007 05:28:56.519563   13189 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1007 05:28:56.583834   13189 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1007 05:28:56.583890   13189 main.go:141] libmachine: Using SSH client type: native
	I1007 05:28:56.584001   13189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032de1f0] 0x1032e0a30 <nil>  [] 0s} localhost 52428 <nil> <nil>}
	I1007 05:28:56.584013   13189 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1007 05:28:56.959177   13189 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1007 05:28:56.959190   13189 machine.go:96] duration metric: took 1.018621292s to provisionDockerMachine
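The docker.service rewrite above uses a write-then-diff pattern: the rendered unit is teed to docker.service.new, and the daemon is only reloaded, enabled, and restarted when diff shows the content changed (here diff fails because the unit did not exist yet, so the new file is installed and docker restarted). A local Go sketch of that idempotent update, names mine:

```go
package unitfile

import (
	"bytes"
	"os"
)

// updateIfChanged installs desired at path only when it differs from the
// current content, and reports whether the caller should follow up with
// daemon-reload and a service restart.
func updateIfChanged(path string, desired []byte) (bool, error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, desired) {
		return false, nil // same content: skip the restart entirely
	}
	if err := os.WriteFile(path+".new", desired, 0o644); err != nil {
		return false, err
	}
	return true, os.Rename(path+".new", path)
}
```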
	I1007 05:28:56.959199   13189 start.go:293] postStartSetup for "stopped-upgrade-431000" (driver="qemu2")
	I1007 05:28:56.959205   13189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 05:28:56.959275   13189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 05:28:56.959285   13189 sshutil.go:53] new ssh client: &{IP:localhost Port:52428 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/id_rsa Username:docker}
	I1007 05:28:56.993700   13189 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 05:28:56.995039   13189 info.go:137] Remote host: Buildroot 2021.02.12
	I1007 05:28:56.995048   13189 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18424-10771/.minikube/addons for local assets ...
	I1007 05:28:56.995124   13189 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18424-10771/.minikube/files for local assets ...
	I1007 05:28:56.995219   13189 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18424-10771/.minikube/files/etc/ssl/certs/112842.pem -> 112842.pem in /etc/ssl/certs
	I1007 05:28:56.995341   13189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 05:28:56.998498   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/files/etc/ssl/certs/112842.pem --> /etc/ssl/certs/112842.pem (1708 bytes)
	I1007 05:28:57.005848   13189 start.go:296] duration metric: took 46.644916ms for postStartSetup
	I1007 05:28:57.005862   13189 fix.go:56] duration metric: took 21.324037833s for fixHost
	I1007 05:28:57.005910   13189 main.go:141] libmachine: Using SSH client type: native
	I1007 05:28:57.006012   13189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032de1f0] 0x1032e0a30 <nil>  [] 0s} localhost 52428 <nil> <nil>}
	I1007 05:28:57.006018   13189 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 05:28:57.067491   13189 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728304137.548994296
	
	I1007 05:28:57.067501   13189 fix.go:216] guest clock: 1728304137.548994296
	I1007 05:28:57.067505   13189 fix.go:229] Guest: 2024-10-07 05:28:57.548994296 -0700 PDT Remote: 2024-10-07 05:28:57.005864 -0700 PDT m=+21.448427251 (delta=543.130296ms)
	I1007 05:28:57.067516   13189 fix.go:200] guest clock delta is within tolerance: 543.130296ms
	I1007 05:28:57.067519   13189 start.go:83] releasing machines lock for "stopped-upgrade-431000", held for 21.38570325s
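The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and only resync when the delta exceeds a tolerance; the 543ms delta here is accepted. A sketch of parsing that output and computing the delta (the 2s tolerance is an assumption; float parsing drops sub-microsecond precision, which is irrelevant at this scale):

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	out := "1728304137.548994296" // guest `date +%s.%N` output from the log
	f, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		panic(err)
	}
	sec := int64(f)
	guest := time.Unix(sec, int64((f-float64(sec))*1e9))
	delta := time.Since(guest)
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, math.Abs(delta.Seconds()) < 2)
}
```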
	I1007 05:28:57.067600   13189 ssh_runner.go:195] Run: cat /version.json
	I1007 05:28:57.067611   13189 sshutil.go:53] new ssh client: &{IP:localhost Port:52428 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/id_rsa Username:docker}
	I1007 05:28:57.067683   13189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 05:28:57.067704   13189 sshutil.go:53] new ssh client: &{IP:localhost Port:52428 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/id_rsa Username:docker}
	W1007 05:28:57.068118   13189 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:52566->127.0.0.1:52428: write: broken pipe
	I1007 05:28:57.068135   13189 retry.go:31] will retry after 316.684788ms: ssh: handshake failed: write tcp 127.0.0.1:52566->127.0.0.1:52428: write: broken pipe
	W1007 05:28:57.425724   13189 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1007 05:28:57.425850   13189 ssh_runner.go:195] Run: systemctl --version
	I1007 05:28:57.429025   13189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 05:28:57.431557   13189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 05:28:57.431623   13189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1007 05:28:57.435620   13189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1007 05:28:57.441855   13189 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1007 05:28:57.441865   13189 start.go:495] detecting cgroup driver to use...
	I1007 05:28:57.441955   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 05:28:57.449216   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1007 05:28:57.453367   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1007 05:28:57.456594   13189 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1007 05:28:57.456626   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1007 05:28:57.459463   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 05:28:57.462461   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1007 05:28:57.465756   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 05:28:57.468938   13189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 05:28:57.471870   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1007 05:28:57.474746   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1007 05:28:57.478257   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1007 05:28:57.481644   13189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 05:28:57.484462   13189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 05:28:57.487008   13189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:28:57.570694   13189 ssh_runner.go:195] Run: sudo systemctl restart containerd
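The run of sed edits above normalizes /etc/containerd/config.toml for the cgroupfs driver (SystemdCgroup = false, runc v2 shim, conf_dir, unprivileged ports) before containerd is restarted. In Go the same rewrite is a multiline-anchored regexp; a sketch of just the SystemdCgroup substitution:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	cfg := []byte("  SystemdCgroup = true\n") // stand-in config fragment
	// Go equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Printf("%s", re.ReplaceAll(cfg, []byte("${1}SystemdCgroup = false")))
}
```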
	I1007 05:28:57.581806   13189 start.go:495] detecting cgroup driver to use...
	I1007 05:28:57.581891   13189 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1007 05:28:57.588711   13189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 05:28:57.592887   13189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 05:28:57.599173   13189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 05:28:57.604383   13189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1007 05:28:57.608967   13189 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1007 05:28:57.662112   13189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1007 05:28:57.667879   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 05:28:57.673652   13189 ssh_runner.go:195] Run: which cri-dockerd
	I1007 05:28:57.674871   13189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1007 05:28:57.678046   13189 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1007 05:28:57.683012   13189 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1007 05:28:57.765468   13189 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1007 05:28:57.845559   13189 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1007 05:28:57.845618   13189 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1007 05:28:57.850883   13189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:28:57.933569   13189 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1007 05:28:59.088166   13189 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.154600791s)
	I1007 05:28:59.088239   13189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1007 05:28:59.092932   13189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1007 05:28:59.097475   13189 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1007 05:28:59.170413   13189 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1007 05:28:59.252122   13189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:28:59.329744   13189 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1007 05:28:59.335725   13189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1007 05:28:59.340216   13189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:28:59.422940   13189 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1007 05:28:59.461258   13189 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1007 05:28:59.461360   13189 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1007 05:28:59.464285   13189 start.go:563] Will wait 60s for crictl version
	I1007 05:28:59.464364   13189 ssh_runner.go:195] Run: which crictl
	I1007 05:28:59.465844   13189 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 05:28:59.480795   13189 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1007 05:28:59.480885   13189 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1007 05:28:59.497349   13189 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1007 05:28:59.518369   13189 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1007 05:28:59.518498   13189 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1007 05:28:59.519868   13189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 05:28:59.523445   13189 kubeadm.go:883] updating cluster {Name:stopped-upgrade-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52462 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1007 05:28:59.523490   13189 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1007 05:28:59.523537   13189 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1007 05:28:59.534411   13189 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1007 05:28:59.534420   13189 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1007 05:28:59.534481   13189 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1007 05:28:59.537964   13189 ssh_runner.go:195] Run: which lz4
	I1007 05:28:59.539168   13189 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 05:28:59.540463   13189 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 05:28:59.540478   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1007 05:29:00.486927   13189 docker.go:649] duration metric: took 947.816083ms to copy over tarball
	I1007 05:29:00.487006   13189 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 05:29:01.683273   13189 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.196266916s)
	I1007 05:29:01.683289   13189 ssh_runner.go:146] rm: /preloaded.tar.lz4
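The preload path above is: stat the target, scp the ~360 MB cached tarball only when the stat fails, extract with lz4-aware tar, then remove the tarball. A local sketch of the check-then-extract half, reusing the exact tar flags from the log:

```go
package main

import (
	"errors"
	"io/fs"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); errors.Is(err, fs.ErrNotExist) {
		// In the log this branch triggers the scp of the cached tarball
		// (ssh_runner.go:362) before extraction can proceed.
		panic("tarball missing: transfer it from the cache first")
	}
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
	_ = os.Remove(tarball) // mirrors the ssh_runner.go:146 rm
}
```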
	I1007 05:29:01.699037   13189 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1007 05:29:01.702429   13189 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1007 05:29:01.707506   13189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:29:01.790911   13189 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1007 05:29:03.436893   13189 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.645996917s)
	I1007 05:29:03.436992   13189 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1007 05:29:03.450576   13189 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1007 05:29:03.450587   13189 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1007 05:29:03.450593   13189 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1007 05:29:03.454810   13189 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:29:03.457081   13189 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:29:03.458356   13189 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:29:03.458484   13189 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:29:03.460262   13189 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:29:03.460282   13189 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:29:03.461688   13189 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:29:03.461840   13189 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:29:03.462626   13189 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:29:03.463213   13189 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:29:03.464518   13189 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1007 05:29:03.464780   13189 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:29:03.465459   13189 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1007 05:29:03.465792   13189 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:29:03.466402   13189 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1007 05:29:03.467235   13189 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1007 05:29:04.033583   13189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:29:04.035567   13189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:29:04.046068   13189 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1007 05:29:04.046110   13189 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:29:04.046172   13189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:29:04.049155   13189 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1007 05:29:04.049179   13189 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:29:04.049248   13189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:29:04.064046   13189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1007 05:29:04.066659   13189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1007 05:29:04.076855   13189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:29:04.088172   13189 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1007 05:29:04.088191   13189 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:29:04.088257   13189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:29:04.098286   13189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1007 05:29:04.118259   13189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:29:04.128192   13189 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1007 05:29:04.128212   13189 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:29:04.128270   13189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:29:04.138110   13189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W1007 05:29:04.146294   13189 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1007 05:29:04.146426   13189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:29:04.156293   13189 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1007 05:29:04.156315   13189 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:29:04.156370   13189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:29:04.165887   13189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1007 05:29:04.166017   13189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1007 05:29:04.168279   13189 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1007 05:29:04.168289   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1007 05:29:04.209856   13189 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1007 05:29:04.209869   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1007 05:29:04.246895   13189 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
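Each cached image follows the cycle visible above: `docker image inspect --format {{.Id}}` to compare against the expected hash, `docker rmi` when the loaded image is the wrong variant (here the amd64 layers on an arm64 host), scp of the cached tarball, then `docker load` fed from the file. A condensed sketch of that cycle; wantID and cachePath are stand-ins:

```go
package images

import (
	"os"
	"os/exec"
	"strings"
)

// ensureImage loads ref from cachePath unless it already exists with wantID.
func ensureImage(ref, wantID, cachePath string) error {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
	if err == nil && strings.TrimSpace(string(out)) == wantID {
		return nil // already present at the right hash: no transfer needed
	}
	if err == nil {
		// Present but at the wrong hash (e.g. arch mismatch): remove it first.
		if err := exec.Command("docker", "rmi", ref).Run(); err != nil {
			return err
		}
	}
	f, err := os.Open(cachePath) // in the log this file arrives via scp
	if err != nil {
		return err
	}
	defer f.Close()
	load := exec.Command("docker", "load")
	load.Stdin = f
	return load.Run()
}
```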
	I1007 05:29:04.253211   13189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1007 05:29:04.263127   13189 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1007 05:29:04.263148   13189 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1007 05:29:04.263210   13189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1007 05:29:04.264259   13189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1007 05:29:04.276483   13189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1007 05:29:04.276638   13189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1007 05:29:04.282902   13189 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1007 05:29:04.282929   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1007 05:29:04.282981   13189 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1007 05:29:04.282998   13189 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1007 05:29:04.283048   13189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1007 05:29:04.292064   13189 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1007 05:29:04.292082   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1007 05:29:04.295566   13189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1007 05:29:04.295711   13189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	W1007 05:29:04.315226   13189 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1007 05:29:04.315331   13189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:29:04.331014   13189 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1007 05:29:04.331085   13189 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1007 05:29:04.331110   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1007 05:29:04.331647   13189 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1007 05:29:04.331668   13189 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:29:04.331715   13189 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:29:04.361715   13189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1007 05:29:04.361900   13189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1007 05:29:04.372598   13189 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1007 05:29:04.372633   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1007 05:29:04.446709   13189 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1007 05:29:04.446724   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1007 05:29:04.808664   13189 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1007 05:29:04.808687   13189 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1007 05:29:04.808695   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1007 05:29:04.943874   13189 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1007 05:29:04.943919   13189 cache_images.go:92] duration metric: took 1.493347292s to LoadCachedImages
	W1007 05:29:04.943981   13189 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I1007 05:29:04.943987   13189 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1007 05:29:04.944046   13189 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 05:29:04.944130   13189 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1007 05:29:04.957750   13189 cni.go:84] Creating CNI manager for ""
	I1007 05:29:04.957766   13189 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:29:04.957775   13189 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 05:29:04.957786   13189 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-431000 NodeName:stopped-upgrade-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 05:29:04.957851   13189 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-431000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
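kubeadm.go renders the multi-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) from the options struct printed at kubeadm.go:181. A toy text/template sketch of that rendering step, covering only a ClusterConfiguration fragment; this is not minikube's actual template:

```go
package main

import (
	"os"
	"text/template"
)

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := struct {
		ControlPlaneAddress, KubernetesVersion, PodSubnet, ServiceCIDR string
		APIServerPort                                                  int
	}{"control-plane.minikube.internal", "v1.24.1", "10.244.0.0/16", "10.96.0.0/12", 8443}
	t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
```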
	
	I1007 05:29:04.957921   13189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1007 05:29:04.961801   13189 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 05:29:04.961842   13189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 05:29:04.964983   13189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1007 05:29:04.970107   13189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 05:29:04.975456   13189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1007 05:29:04.980837   13189 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1007 05:29:04.982045   13189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 05:29:04.985586   13189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:29:05.062687   13189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 05:29:05.068319   13189 certs.go:68] Setting up /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000 for IP: 10.0.2.15
	I1007 05:29:05.068327   13189 certs.go:194] generating shared ca certs ...
	I1007 05:29:05.068336   13189 certs.go:226] acquiring lock for ca certs: {Name:mkc7f2d51afe66903c603984849255f5d4b47504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:29:05.068511   13189 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.key
	I1007 05:29:05.068551   13189 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/proxy-client-ca.key
	I1007 05:29:05.068559   13189 certs.go:256] generating profile certs ...
	I1007 05:29:05.068620   13189 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/client.key
	I1007 05:29:05.068638   13189 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.key.67a38bc6
	I1007 05:29:05.068651   13189 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.crt.67a38bc6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1007 05:29:05.125855   13189 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.crt.67a38bc6 ...
	I1007 05:29:05.125869   13189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.crt.67a38bc6: {Name:mka9eac84c12dce0636ec1fb7e6b06bf09b3c1be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:29:05.126341   13189 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.key.67a38bc6 ...
	I1007 05:29:05.126347   13189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.key.67a38bc6: {Name:mkb2381e1c0063e6b89ce0166903306a3ddcd99b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:29:05.126558   13189 certs.go:381] copying /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.crt.67a38bc6 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.crt
	I1007 05:29:05.126681   13189 certs.go:385] copying /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.key.67a38bc6 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.key
	I1007 05:29:05.126821   13189 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/proxy-client.key
	I1007 05:29:05.126966   13189 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/11284.pem (1338 bytes)
	W1007 05:29:05.126990   13189 certs.go:480] ignoring /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/11284_empty.pem, impossibly tiny 0 bytes
	I1007 05:29:05.126995   13189 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 05:29:05.127022   13189 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem (1082 bytes)
	I1007 05:29:05.127040   13189 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem (1123 bytes)
	I1007 05:29:05.127057   13189 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/key.pem (1675 bytes)
	I1007 05:29:05.127095   13189 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/files/etc/ssl/certs/112842.pem (1708 bytes)
	I1007 05:29:05.127470   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 05:29:05.134372   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 05:29:05.141138   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 05:29:05.147951   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 05:29:05.154942   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1007 05:29:05.162075   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 05:29:05.169545   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 05:29:05.177179   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 05:29:05.184484   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 05:29:05.191186   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/11284.pem --> /usr/share/ca-certificates/11284.pem (1338 bytes)
	I1007 05:29:05.198219   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/files/etc/ssl/certs/112842.pem --> /usr/share/ca-certificates/112842.pem (1708 bytes)
	I1007 05:29:05.205364   13189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 05:29:05.210399   13189 ssh_runner.go:195] Run: openssl version
	I1007 05:29:05.212328   13189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 05:29:05.215063   13189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 05:29:05.216822   13189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I1007 05:29:05.216851   13189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 05:29:05.218501   13189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 05:29:05.221840   13189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11284.pem && ln -fs /usr/share/ca-certificates/11284.pem /etc/ssl/certs/11284.pem"
	I1007 05:29:05.225085   13189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11284.pem
	I1007 05:29:05.226474   13189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:13 /usr/share/ca-certificates/11284.pem
	I1007 05:29:05.226499   13189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11284.pem
	I1007 05:29:05.228329   13189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11284.pem /etc/ssl/certs/51391683.0"
	I1007 05:29:05.231151   13189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112842.pem && ln -fs /usr/share/ca-certificates/112842.pem /etc/ssl/certs/112842.pem"
	I1007 05:29:05.234388   13189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112842.pem
	I1007 05:29:05.235796   13189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:13 /usr/share/ca-certificates/112842.pem
	I1007 05:29:05.235820   13189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112842.pem
	I1007 05:29:05.237489   13189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112842.pem /etc/ssl/certs/3ec20f2e.0"
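
	The three test/ln pairs above re-implement OpenSSL's c_rehash by hand: each CA is copied into /usr/share/ca-certificates, its subject-name hash is computed with `openssl x509 -hash -noout`, and the certificate is symlinked as /etc/ssl/certs/<hash>.0 so OpenSSL's lookup-by-directory can find it (b5213941 is the hash of minikubeCA.pem). A minimal Go sketch of the same install step; it shells out to openssl exactly as the log does and needs root for the symlink:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCA mirrors the log's ln -fs pattern: compute the OpenSSL
	// subject-name hash of a PEM certificate, then symlink the cert as
	// /etc/ssl/certs/<hash>.0 so lookup-by-hash can find it.
	func installCA(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		os.Remove(link) // emulate ln -f: drop any existing link first
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
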
	I1007 05:29:05.240492   13189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 05:29:05.241925   13189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 05:29:05.244812   13189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 05:29:05.246645   13189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 05:29:05.248637   13189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 05:29:05.250414   13189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 05:29:05.252223   13189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
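
	The six openssl runs above use `-checkend 86400`, which exits 0 only if the certificate stays valid for at least the next 86400 seconds (24 hours); a certificate failing the check would be regenerated before the restart. The same check in a standard-library Go sketch (the path in main is one of the files above, picked for illustration):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certValidFor reports whether the PEM certificate at path remains
	// valid for at least the given window (openssl -checkend semantics).
	func certValidFor(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := certValidFor("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		fmt.Println("valid for next 24h:", ok)
	}
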
	I1007 05:29:05.253965   13189 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52462 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 05:29:05.254033   13189 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1007 05:29:05.264348   13189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 05:29:05.267484   13189 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 05:29:05.267489   13189 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 05:29:05.267515   13189 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 05:29:05.270307   13189 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 05:29:05.270593   13189 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-431000" does not appear in /Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:29:05.270693   13189 kubeconfig.go:62] /Users/jenkins/minikube-integration/18424-10771/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-431000" cluster setting kubeconfig missing "stopped-upgrade-431000" context setting]
	I1007 05:29:05.270889   13189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/kubeconfig: {Name:mkfa460adb077498749c83f32a682247504db19f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:29:05.271319   13189 kapi.go:59] client config for stopped-upgrade-431000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104d33ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 05:29:05.271668   13189 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 05:29:05.274514   13189 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-431000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
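
	Drift detection here is nothing more than `diff -u` over the deployed kubeadm.yaml and the freshly rendered kubeadm.yaml.new: a non-empty diff (diff exit status 1) means the configuration must be reapplied. In this run the criSocket gained its unix:// scheme and the kubelet moved from the systemd to the cgroupfs cgroup driver, so minikube proceeds to stop the kube-system containers and reconfigure. A sketch of that exit-code convention, assuming GNU diff semantics (0 = identical, 1 = differ, >1 = error):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configDrifted runs `diff -u old new` and interprets the exit code:
	// 0 = no drift, 1 = files differ (drift), anything else = real error.
	func configDrifted(oldPath, newPath string) (bool, string, error) {
		out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
		if err == nil {
			return false, "", nil
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return true, string(out), nil
		}
		return false, "", err
	}

	func main() {
		drifted, diff, err := configDrifted(
			"/var/tmp/minikube/kubeadm.yaml",
			"/var/tmp/minikube/kubeadm.yaml.new",
		)
		if err != nil {
			panic(err)
		}
		if drifted {
			fmt.Print(diff)
		}
	}
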
	I1007 05:29:05.274521   13189 kubeadm.go:1160] stopping kube-system containers ...
	I1007 05:29:05.274569   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1007 05:29:05.285689   13189 docker.go:483] Stopping containers: [870237c16304 ee10baafa906 d47d3188153e ae6910d4e111 84309b560471 96c0fdc311b8 17f5d7610b4a 6ce14c0f1d79]
	I1007 05:29:05.285765   13189 ssh_runner.go:195] Run: docker stop 870237c16304 ee10baafa906 d47d3188153e ae6910d4e111 84309b560471 96c0fdc311b8 17f5d7610b4a 6ce14c0f1d79
	I1007 05:29:05.297097   13189 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 05:29:05.302833   13189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 05:29:05.306082   13189 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 05:29:05.306087   13189 kubeadm.go:157] found existing configuration files:
	
	I1007 05:29:05.306116   13189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/admin.conf
	I1007 05:29:05.309349   13189 kubeadm.go:163] "https://control-plane.minikube.internal:52462" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 05:29:05.309380   13189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 05:29:05.311938   13189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/kubelet.conf
	I1007 05:29:05.314412   13189 kubeadm.go:163] "https://control-plane.minikube.internal:52462" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 05:29:05.314443   13189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 05:29:05.317648   13189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/controller-manager.conf
	I1007 05:29:05.320703   13189 kubeadm.go:163] "https://control-plane.minikube.internal:52462" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 05:29:05.320729   13189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 05:29:05.323162   13189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/scheduler.conf
	I1007 05:29:05.326083   13189 kubeadm.go:163] "https://control-plane.minikube.internal:52462" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 05:29:05.326109   13189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
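
	The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected endpoint (https://control-plane.minikube.internal:52462). Exit status 2 from grep, as seen here, means the file does not exist at all; minikube still issues the rm -f and lets kubeadm regenerate everything. A compact sketch of the pattern (sudo elided):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// cleanStaleKubeconfigs mirrors the grep-then-rm loop in the log: any
	// kubeconfig that does not mention the expected endpoint is removed so
	// kubeadm regenerates it. grep exits 1 when the pattern is absent and
	// 2 when the file is missing; both mean "do not keep this file".
	func cleanStaleKubeconfigs(endpoint string, files []string) {
		for _, f := range files {
			if err := exec.Command("grep", endpoint, f).Run(); err != nil {
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
				os.Remove(f) // ignore errors: the file may already be gone
			}
		}
	}

	func main() {
		cleanStaleKubeconfigs("https://control-plane.minikube.internal:52462", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}
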
	I1007 05:29:05.328991   13189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 05:29:05.331749   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:29:05.353000   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:29:06.474168   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:05.873772   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:29:06.007841   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:29:06.032302   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
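
	Rather than one monolithic `kubeadm init`, the restart path replays individual init phases against the same config file: certs, kubeconfig, kubelet-start, control-plane, and local etcd, in that order. A sketch of that sequence, with the binary path and phase list taken from the commands above:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("/var/lib/minikube/binaries/v1.24.1/kubeadm", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
				os.Exit(1)
			}
		}
	}
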
	I1007 05:29:06.055719   13189 api_server.go:52] waiting for apiserver process to appear ...
	I1007 05:29:06.055818   13189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:29:06.556750   13189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:29:07.057881   13189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:29:07.062256   13189 api_server.go:72] duration metric: took 1.006557416s to wait for apiserver process to appear ...
	I1007 05:29:07.062272   13189 api_server.go:88] waiting for apiserver healthz status ...
	I1007 05:29:07.062282   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
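
	From here both clusters (pids 13060 and 13189) sit in the same wait loop: pgrep until a kube-apiserver process exists, then poll https://10.0.2.15:8443/healthz until it returns 200. Every poll in this run dies with `context deadline exceeded` after roughly five seconds, producing the alternating Checking/stopped pairs that dominate the rest of the log. A minimal sketch of such a poller, assuming a self-signed serving certificate (hence the skipped TLS verification) and the ~5s per-request timeout visible in the timestamps:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// HTTP 200 or the overall deadline passes. The apiserver serves a cert
	// signed by minikube's own CA, so verification is skipped in this sketch.
	func waitForHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // per-request timeout, as in the log
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
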
	I1007 05:29:11.476328   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:11.476511   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:29:11.487631   13060 logs.go:282] 2 containers: [eba0cb217bc2 8269cd6065ba]
	I1007 05:29:11.487705   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:29:11.498319   13060 logs.go:282] 2 containers: [dcb43861e68c 89902f34c603]
	I1007 05:29:11.498399   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:29:11.508691   13060 logs.go:282] 1 containers: [1a6f70326ce2]
	I1007 05:29:11.508770   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:29:11.519624   13060 logs.go:282] 2 containers: [a7744cef9eab 349aeab911b4]
	I1007 05:29:11.519716   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:29:11.530535   13060 logs.go:282] 1 containers: [aba6d1211c82]
	I1007 05:29:11.530615   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:29:11.548237   13060 logs.go:282] 2 containers: [a035f9be42f8 cff18c2b3cd6]
	I1007 05:29:11.548321   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:29:11.558626   13060 logs.go:282] 0 containers: []
	W1007 05:29:11.558638   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:29:11.558695   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:29:11.569229   13060 logs.go:282] 2 containers: [574be28e75fb 94aef91365df]
	I1007 05:29:11.569249   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:29:11.569254   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:29:11.594096   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:29:11.594111   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:29:11.606191   13060 logs.go:123] Gathering logs for etcd [dcb43861e68c] ...
	I1007 05:29:11.606202   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcb43861e68c"
	I1007 05:29:11.621770   13060 logs.go:123] Gathering logs for coredns [1a6f70326ce2] ...
	I1007 05:29:11.621784   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6f70326ce2"
	I1007 05:29:11.633331   13060 logs.go:123] Gathering logs for kube-scheduler [a7744cef9eab] ...
	I1007 05:29:11.633342   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7744cef9eab"
	I1007 05:29:11.645478   13060 logs.go:123] Gathering logs for kube-scheduler [349aeab911b4] ...
	I1007 05:29:11.645489   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349aeab911b4"
	I1007 05:29:11.664076   13060 logs.go:123] Gathering logs for etcd [89902f34c603] ...
	I1007 05:29:11.664088   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89902f34c603"
	I1007 05:29:11.689512   13060 logs.go:123] Gathering logs for storage-provisioner [574be28e75fb] ...
	I1007 05:29:11.689525   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574be28e75fb"
	I1007 05:29:11.704932   13060 logs.go:123] Gathering logs for storage-provisioner [94aef91365df] ...
	I1007 05:29:11.704947   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94aef91365df"
	I1007 05:29:11.717942   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:29:11.717953   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:29:11.758741   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:29:11.758844   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:29:11.759195   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:29:11.759203   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:29:11.763693   13060 logs.go:123] Gathering logs for kube-controller-manager [a035f9be42f8] ...
	I1007 05:29:11.763702   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a035f9be42f8"
	I1007 05:29:11.782887   13060 logs.go:123] Gathering logs for kube-controller-manager [cff18c2b3cd6] ...
	I1007 05:29:11.782899   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cff18c2b3cd6"
	I1007 05:29:11.798389   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:29:11.798403   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:29:11.833058   13060 logs.go:123] Gathering logs for kube-apiserver [eba0cb217bc2] ...
	I1007 05:29:11.833071   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eba0cb217bc2"
	I1007 05:29:11.847565   13060 logs.go:123] Gathering logs for kube-apiserver [8269cd6065ba] ...
	I1007 05:29:11.847578   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8269cd6065ba"
	I1007 05:29:11.867681   13060 logs.go:123] Gathering logs for kube-proxy [aba6d1211c82] ...
	I1007 05:29:11.867695   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aba6d1211c82"
	I1007 05:29:11.880312   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:29:11.880322   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:29:11.880348   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:29:11.880356   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:29:11.880359   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:29:11.880364   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:29:11.880367   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
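
	Each "Gathering logs for ..." round is the same fan-out: `docker ps -a --filter=name=k8s_<component>` to discover container IDs, `docker logs --tail 400` per container, journalctl for the kubelet and docker units, and dmesg. Kubelet lines matching known problem patterns get flagged; the two flagged here show the node being denied list/watch on the coredns ConfigMap because the node authorizer finds no relationship between the node and that object. A compact sketch of the collection loop (component list copied from the filters above, error handling elided):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all containers (running or not) for one component.
	func containerIDs(component string) []string {
		out, _ := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		return strings.Fields(string(out))
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
			for _, id := range containerIDs(c) {
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("== %s [%s]: %d bytes of logs\n", c, id, len(logs))
			}
		}
	}
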
	I1007 05:29:12.064269   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:12.064316   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:17.064473   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:17.064509   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:21.884376   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:22.065069   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:22.065111   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:26.886509   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:26.886700   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:29:26.898640   13060 logs.go:282] 2 containers: [eba0cb217bc2 8269cd6065ba]
	I1007 05:29:26.898723   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:29:26.909444   13060 logs.go:282] 2 containers: [dcb43861e68c 89902f34c603]
	I1007 05:29:26.909513   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:29:26.919777   13060 logs.go:282] 1 containers: [1a6f70326ce2]
	I1007 05:29:26.919859   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:29:26.930282   13060 logs.go:282] 2 containers: [a7744cef9eab 349aeab911b4]
	I1007 05:29:26.930354   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:29:26.941119   13060 logs.go:282] 1 containers: [aba6d1211c82]
	I1007 05:29:26.941204   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:29:26.951663   13060 logs.go:282] 2 containers: [a035f9be42f8 cff18c2b3cd6]
	I1007 05:29:26.951732   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:29:26.962261   13060 logs.go:282] 0 containers: []
	W1007 05:29:26.962273   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:29:26.962342   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:29:26.973542   13060 logs.go:282] 2 containers: [574be28e75fb 94aef91365df]
	I1007 05:29:26.973564   13060 logs.go:123] Gathering logs for kube-scheduler [349aeab911b4] ...
	I1007 05:29:26.973569   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349aeab911b4"
	I1007 05:29:26.985501   13060 logs.go:123] Gathering logs for storage-provisioner [574be28e75fb] ...
	I1007 05:29:26.985512   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574be28e75fb"
	I1007 05:29:26.997332   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:29:26.997341   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:29:27.008986   13060 logs.go:123] Gathering logs for kube-apiserver [eba0cb217bc2] ...
	I1007 05:29:27.009003   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eba0cb217bc2"
	I1007 05:29:27.022974   13060 logs.go:123] Gathering logs for etcd [89902f34c603] ...
	I1007 05:29:27.022988   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89902f34c603"
	I1007 05:29:27.040474   13060 logs.go:123] Gathering logs for coredns [1a6f70326ce2] ...
	I1007 05:29:27.040486   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6f70326ce2"
	I1007 05:29:27.051985   13060 logs.go:123] Gathering logs for kube-controller-manager [a035f9be42f8] ...
	I1007 05:29:27.051995   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a035f9be42f8"
	I1007 05:29:27.068984   13060 logs.go:123] Gathering logs for kube-controller-manager [cff18c2b3cd6] ...
	I1007 05:29:27.068992   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cff18c2b3cd6"
	I1007 05:29:27.080695   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:29:27.080704   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:29:27.115571   13060 logs.go:123] Gathering logs for kube-apiserver [8269cd6065ba] ...
	I1007 05:29:27.115586   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8269cd6065ba"
	I1007 05:29:27.135143   13060 logs.go:123] Gathering logs for etcd [dcb43861e68c] ...
	I1007 05:29:27.135157   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcb43861e68c"
	I1007 05:29:27.148881   13060 logs.go:123] Gathering logs for storage-provisioner [94aef91365df] ...
	I1007 05:29:27.148891   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94aef91365df"
	I1007 05:29:27.160398   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:29:27.160411   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:29:27.199041   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:29:27.199133   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:29:27.199473   13060 logs.go:123] Gathering logs for kube-scheduler [a7744cef9eab] ...
	I1007 05:29:27.199478   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7744cef9eab"
	I1007 05:29:27.211627   13060 logs.go:123] Gathering logs for kube-proxy [aba6d1211c82] ...
	I1007 05:29:27.211637   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aba6d1211c82"
	I1007 05:29:27.223488   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:29:27.223496   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:29:27.247959   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:29:27.247966   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:29:27.252546   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:29:27.252556   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:29:27.252588   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:29:27.252593   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:29:27.252596   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:29:27.252599   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:29:27.252604   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:29:27.065565   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:27.065581   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:32.066091   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:32.066155   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:37.256627   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:37.067003   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:37.067050   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:42.259188   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:42.259698   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:29:42.297890   13060 logs.go:282] 2 containers: [eba0cb217bc2 8269cd6065ba]
	I1007 05:29:42.298036   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:29:42.318277   13060 logs.go:282] 2 containers: [dcb43861e68c 89902f34c603]
	I1007 05:29:42.318392   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:29:42.333698   13060 logs.go:282] 1 containers: [1a6f70326ce2]
	I1007 05:29:42.333789   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:29:42.346521   13060 logs.go:282] 2 containers: [a7744cef9eab 349aeab911b4]
	I1007 05:29:42.346602   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:29:42.358146   13060 logs.go:282] 1 containers: [aba6d1211c82]
	I1007 05:29:42.358226   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:29:42.368981   13060 logs.go:282] 2 containers: [a035f9be42f8 cff18c2b3cd6]
	I1007 05:29:42.369062   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:29:42.380727   13060 logs.go:282] 0 containers: []
	W1007 05:29:42.380740   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:29:42.380804   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:29:42.394331   13060 logs.go:282] 2 containers: [574be28e75fb 94aef91365df]
	I1007 05:29:42.394349   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:29:42.394355   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:29:42.398715   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:29:42.398722   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:29:42.433688   13060 logs.go:123] Gathering logs for etcd [dcb43861e68c] ...
	I1007 05:29:42.433701   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dcb43861e68c"
	I1007 05:29:42.448245   13060 logs.go:123] Gathering logs for coredns [1a6f70326ce2] ...
	I1007 05:29:42.448255   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a6f70326ce2"
	I1007 05:29:42.459609   13060 logs.go:123] Gathering logs for kube-scheduler [a7744cef9eab] ...
	I1007 05:29:42.459621   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7744cef9eab"
	I1007 05:29:42.471280   13060 logs.go:123] Gathering logs for storage-provisioner [574be28e75fb] ...
	I1007 05:29:42.471295   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 574be28e75fb"
	I1007 05:29:42.482856   13060 logs.go:123] Gathering logs for kube-apiserver [eba0cb217bc2] ...
	I1007 05:29:42.482867   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eba0cb217bc2"
	I1007 05:29:42.497392   13060 logs.go:123] Gathering logs for kube-controller-manager [a035f9be42f8] ...
	I1007 05:29:42.497405   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a035f9be42f8"
	I1007 05:29:42.514642   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:29:42.514655   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:29:42.554570   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:29:42.554661   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:29:42.555016   13060 logs.go:123] Gathering logs for kube-scheduler [349aeab911b4] ...
	I1007 05:29:42.555022   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 349aeab911b4"
	I1007 05:29:42.566963   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:29:42.566973   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:29:42.590946   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:29:42.590957   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:29:42.604293   13060 logs.go:123] Gathering logs for kube-apiserver [8269cd6065ba] ...
	I1007 05:29:42.604307   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8269cd6065ba"
	I1007 05:29:42.623726   13060 logs.go:123] Gathering logs for etcd [89902f34c603] ...
	I1007 05:29:42.623739   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 89902f34c603"
	I1007 05:29:42.641896   13060 logs.go:123] Gathering logs for kube-proxy [aba6d1211c82] ...
	I1007 05:29:42.641910   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aba6d1211c82"
	I1007 05:29:42.654778   13060 logs.go:123] Gathering logs for kube-controller-manager [cff18c2b3cd6] ...
	I1007 05:29:42.654788   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cff18c2b3cd6"
	I1007 05:29:42.666048   13060 logs.go:123] Gathering logs for storage-provisioner [94aef91365df] ...
	I1007 05:29:42.666058   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 94aef91365df"
	I1007 05:29:42.679177   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:29:42.679188   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:29:42.679216   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:29:42.679221   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:29:42.679225   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:29:42.679229   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:29:42.679232   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:29:42.068225   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:42.068269   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:47.068644   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:47.068685   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:52.683194   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:52.070186   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:52.070232   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:57.685585   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:57.685807   13060 kubeadm.go:597] duration metric: took 4m8.055734083s to restartPrimaryControlPlane
	W1007 05:29:57.686019   13060 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 05:29:57.686098   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1007 05:29:58.733517   13060 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.047420334s)
	I1007 05:29:58.733589   13060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 05:29:58.738812   13060 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 05:29:58.741814   13060 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 05:29:58.744580   13060 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 05:29:58.744588   13060 kubeadm.go:157] found existing configuration files:
	
	I1007 05:29:58.744620   13060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/admin.conf
	I1007 05:29:58.746966   13060 kubeadm.go:163] "https://control-plane.minikube.internal:52242" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 05:29:58.746993   13060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 05:29:58.749947   13060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/kubelet.conf
	I1007 05:29:58.752981   13060 kubeadm.go:163] "https://control-plane.minikube.internal:52242" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 05:29:58.753011   13060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 05:29:58.755624   13060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/controller-manager.conf
	I1007 05:29:58.758159   13060 kubeadm.go:163] "https://control-plane.minikube.internal:52242" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 05:29:58.758188   13060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 05:29:58.761311   13060 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/scheduler.conf
	I1007 05:29:58.764129   13060 kubeadm.go:163] "https://control-plane.minikube.internal:52242" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52242 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 05:29:58.764160   13060 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 05:29:58.766719   13060 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 05:29:58.785904   13060 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1007 05:29:58.786018   13060 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 05:29:58.834880   13060 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 05:29:58.834943   13060 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 05:29:58.835010   13060 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 05:29:58.892538   13060 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 05:29:58.896528   13060 out.go:235]   - Generating certificates and keys ...
	I1007 05:29:58.896563   13060 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 05:29:58.896597   13060 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 05:29:58.896645   13060 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 05:29:58.896678   13060 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 05:29:58.896717   13060 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 05:29:58.896743   13060 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 05:29:58.896778   13060 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 05:29:58.896808   13060 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 05:29:58.896859   13060 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 05:29:58.896894   13060 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 05:29:58.896911   13060 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 05:29:58.896943   13060 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 05:29:59.400631   13060 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 05:29:59.444892   13060 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 05:29:59.494129   13060 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 05:29:59.556451   13060 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 05:29:59.585391   13060 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 05:29:59.585678   13060 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 05:29:59.585792   13060 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 05:29:59.679411   13060 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 05:29:57.072238   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:57.072277   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:59.683595   13060 out.go:235]   - Booting up control plane ...
	I1007 05:29:59.683637   13060 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 05:29:59.683678   13060 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 05:29:59.683785   13060 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 05:29:59.684006   13060 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 05:29:59.684990   13060 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 05:30:02.072906   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:02.072943   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:04.688774   13060 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.003350 seconds
	I1007 05:30:04.688879   13060 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 05:30:04.694347   13060 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 05:30:05.202980   13060 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 05:30:05.203095   13060 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-494000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 05:30:05.707870   13060 kubeadm.go:310] [bootstrap-token] Using token: okg04l.5oe8mopp6o37senu
	I1007 05:30:05.711896   13060 out.go:235]   - Configuring RBAC rules ...
	I1007 05:30:05.711966   13060 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 05:30:05.712020   13060 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 05:30:05.718701   13060 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 05:30:05.719605   13060 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 05:30:05.720591   13060 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 05:30:05.721448   13060 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 05:30:05.725831   13060 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 05:30:05.894562   13060 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 05:30:06.113640   13060 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 05:30:06.114086   13060 kubeadm.go:310] 
	I1007 05:30:06.114122   13060 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 05:30:06.114130   13060 kubeadm.go:310] 
	I1007 05:30:06.114168   13060 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 05:30:06.114237   13060 kubeadm.go:310] 
	I1007 05:30:06.114258   13060 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 05:30:06.114302   13060 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 05:30:06.114336   13060 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 05:30:06.114340   13060 kubeadm.go:310] 
	I1007 05:30:06.114366   13060 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 05:30:06.114370   13060 kubeadm.go:310] 
	I1007 05:30:06.114397   13060 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 05:30:06.114403   13060 kubeadm.go:310] 
	I1007 05:30:06.114428   13060 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 05:30:06.114517   13060 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 05:30:06.114609   13060 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 05:30:06.114615   13060 kubeadm.go:310] 
	I1007 05:30:06.114670   13060 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 05:30:06.114739   13060 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 05:30:06.114754   13060 kubeadm.go:310] 
	I1007 05:30:06.114805   13060 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token okg04l.5oe8mopp6o37senu \
	I1007 05:30:06.114876   13060 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a062c1d11feacd55c1665e5cde1180fa46a0cb1088d7ea40ca5bcc8cf3f8fe8c \
	I1007 05:30:06.114889   13060 kubeadm.go:310] 	--control-plane 
	I1007 05:30:06.114891   13060 kubeadm.go:310] 
	I1007 05:30:06.114949   13060 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 05:30:06.114955   13060 kubeadm.go:310] 
	I1007 05:30:06.114998   13060 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token okg04l.5oe8mopp6o37senu \
	I1007 05:30:06.115116   13060 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a062c1d11feacd55c1665e5cde1180fa46a0cb1088d7ea40ca5bcc8cf3f8fe8c 
	I1007 05:30:06.115202   13060 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
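
	The `--discovery-token-ca-cert-hash` printed in both join commands is not a digest of the ca.crt file bytes: kubeadm pins SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A short sketch that recomputes such a pin from the CA used in this run:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER SubjectPublicKeyInfo, not the PEM file.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}
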
	I1007 05:30:06.115210   13060 cni.go:84] Creating CNI manager for ""
	I1007 05:30:06.115217   13060 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:30:06.119531   13060 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 05:30:06.126643   13060 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 05:30:06.130319   13060 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
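
	The 496 bytes written to /etc/cni/net.d/1-k8s.conflist are minikube's bridge CNI chain. The actual contents are not captured in this log; a representative bridge-plus-portmap conflist (all values illustrative, not taken from this run) looks like:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
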
	I1007 05:30:06.135426   13060 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 05:30:06.135490   13060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 05:30:06.135523   13060 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-494000 minikube.k8s.io/updated_at=2024_10_07T05_30_06_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=running-upgrade-494000 minikube.k8s.io/primary=true
	I1007 05:30:06.140637   13060 ops.go:34] apiserver oom_adj: -16
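
	The oom_adj read above confirms the kubelet has shielded the apiserver from the OOM killer: -16 makes the kernel strongly prefer other victims. A sketch of the same probe (pgrep, then /proc; the deprecated oom_adj file is read here only to match the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "no kube-apiserver process:", err)
			os.Exit(1)
		}
		pid := strings.Fields(string(out))[0] // first match is enough for a sketch
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
	}
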
	I1007 05:30:06.168622   13060 kubeadm.go:1113] duration metric: took 33.183417ms to wait for elevateKubeSystemPrivileges
	I1007 05:30:06.182798   13060 kubeadm.go:394] duration metric: took 4m16.57003675s to StartCluster
	I1007 05:30:06.182818   13060 settings.go:142] acquiring lock: {Name:mk5a4e22b238c18e7ccc84c412018fc85088176f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:30:06.182999   13060 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:30:06.183361   13060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/kubeconfig: {Name:mkfa460adb077498749c83f32a682247504db19f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:30:06.183546   13060 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:30:06.183646   13060 config.go:182] Loaded profile config "running-upgrade-494000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:30:06.183580   13060 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 05:30:06.183675   13060 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-494000"
	I1007 05:30:06.183677   13060 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-494000"
	I1007 05:30:06.183682   13060 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-494000"
	W1007 05:30:06.183686   13060 addons.go:243] addon storage-provisioner should already be in state true
	I1007 05:30:06.183699   13060 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-494000"
	I1007 05:30:06.183699   13060 host.go:66] Checking if "running-upgrade-494000" exists ...
	I1007 05:30:06.184667   13060 kapi.go:59] client config for running-upgrade-494000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/running-upgrade-494000/client.key", CAFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1063c7ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 05:30:06.184790   13060 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-494000"
	W1007 05:30:06.184795   13060 addons.go:243] addon default-storageclass should already be in state true
	I1007 05:30:06.184801   13060 host.go:66] Checking if "running-upgrade-494000" exists ...
	I1007 05:30:06.186465   13060 out.go:177] * Verifying Kubernetes components...
	I1007 05:30:06.186803   13060 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 05:30:06.190560   13060 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 05:30:06.190583   13060 sshutil.go:53] new ssh client: &{IP:localhost Port:52210 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/running-upgrade-494000/id_rsa Username:docker}
	I1007 05:30:06.194462   13060 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:30:06.198529   13060 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:30:06.202633   13060 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 05:30:06.202648   13060 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 05:30:06.202663   13060 sshutil.go:53] new ssh client: &{IP:localhost Port:52210 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/running-upgrade-494000/id_rsa Username:docker}
	I1007 05:30:06.281198   13060 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 05:30:06.286562   13060 api_server.go:52] waiting for apiserver process to appear ...
	I1007 05:30:06.286618   13060 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:30:06.291162   13060 api_server.go:72] duration metric: took 107.606417ms to wait for apiserver process to appear ...
	I1007 05:30:06.291172   13060 api_server.go:88] waiting for apiserver healthz status ...
	I1007 05:30:06.291180   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
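
From here on, both minikube processes poll the healthz endpoint, and every attempt below ends in a client timeout. A manual probe of the same endpoint, sketched under the assumption that curl is available in the guest (-k skips verification of the cluster's self-signed CA):

    # expect the body "ok" from a healthy apiserver; a hang here matches the timeouts in this log
    curl -ks --max-time 5 https://10.0.2.15:8443/healthz; echo
    # per-check breakdown, useful when the server does answer
    curl -ks --max-time 5 "https://10.0.2.15:8443/healthz?verbose"
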
	I1007 05:30:06.296411   13060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 05:30:06.316856   13060 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 05:30:06.661944   13060 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 05:30:06.661956   13060 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 05:30:07.075138   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:07.075320   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:30:07.087291   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:30:07.087380   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:30:07.097777   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:30:07.097862   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:30:07.108310   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:30:07.108392   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:30:07.118706   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:30:07.118797   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:30:07.129444   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:30:07.129526   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:30:07.140466   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:30:07.140546   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:30:07.151119   13189 logs.go:282] 0 containers: []
	W1007 05:30:07.151128   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:30:07.151189   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:30:07.161716   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:30:07.161740   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:30:07.161745   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:30:07.173463   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:30:07.173473   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:30:07.190906   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:30:07.190915   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:30:07.205151   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:30:07.205162   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:30:07.247174   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:30:07.247183   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:30:07.261704   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:30:07.261715   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:30:07.273912   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:30:07.273923   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:30:07.300166   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:30:07.300173   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:30:07.408768   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:30:07.408782   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:30:07.423363   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:30:07.423373   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:30:07.434816   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:30:07.434826   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:30:07.446612   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:30:07.446623   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:30:07.483726   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:30:07.483736   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:30:07.487762   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:30:07.487769   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:30:07.502501   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:30:07.502510   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:30:07.518761   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:30:07.518773   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:30:07.537186   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:30:07.537198   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:30:10.050508   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:11.293182   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:11.293222   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:15.052800   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:15.053090   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:30:15.075807   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:30:15.075923   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:30:15.092410   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:30:15.092495   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:30:15.104401   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:30:15.104484   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:30:15.115406   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:30:15.115498   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:30:15.126807   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:30:15.126878   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:30:15.137368   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:30:15.137471   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:30:15.147966   13189 logs.go:282] 0 containers: []
	W1007 05:30:15.147980   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:30:15.148046   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:30:15.158561   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:30:15.158586   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:30:15.158592   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:30:15.173013   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:30:15.173025   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:30:15.187333   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:30:15.187347   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:30:15.198536   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:30:15.198546   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:30:15.210734   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:30:15.210746   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:30:15.235855   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:30:15.235863   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:30:15.272699   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:30:15.272711   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:30:15.296050   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:30:15.296060   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:30:15.307710   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:30:15.307723   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:30:15.318329   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:30:15.318344   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:30:15.357028   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:30:15.357042   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:30:15.371407   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:30:15.371420   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:30:15.385091   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:30:15.385107   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:30:15.397211   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:30:15.397226   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:30:15.401753   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:30:15.401760   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:30:15.439522   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:30:15.439533   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:30:15.455053   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:30:15.455064   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:30:16.293450   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:16.293473   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:17.974324   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:21.293667   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:21.293685   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:22.976473   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:22.976702   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:30:23.000424   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:30:23.000535   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:30:23.020042   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:30:23.020132   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:30:23.032108   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:30:23.032188   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:30:23.042597   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:30:23.042677   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:30:23.054222   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:30:23.054312   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:30:23.065036   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:30:23.065112   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:30:23.075853   13189 logs.go:282] 0 containers: []
	W1007 05:30:23.075864   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:30:23.075935   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:30:23.087961   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:30:23.087979   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:30:23.087984   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:30:23.099127   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:30:23.099138   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:30:23.110305   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:30:23.110319   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:30:23.114350   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:30:23.114357   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:30:23.151365   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:30:23.151377   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:30:23.163140   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:30:23.163151   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:30:23.174743   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:30:23.174752   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:30:23.191659   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:30:23.191669   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:30:23.204793   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:30:23.204808   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:30:23.242721   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:30:23.242732   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:30:23.266087   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:30:23.266097   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:30:23.277837   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:30:23.277848   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:30:23.292533   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:30:23.292544   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:30:23.306384   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:30:23.306396   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:30:23.324072   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:30:23.324082   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:30:23.360585   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:30:23.360596   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:30:23.377710   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:30:23.377724   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:30:26.293979   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:26.294021   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:25.892022   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:31.294907   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:31.294926   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:30.894612   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:30.894778   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:30:30.911313   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:30:30.911401   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:30:30.927879   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:30:30.927951   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:30:30.941039   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:30:30.941118   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:30:30.951865   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:30:30.951965   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:30:30.962366   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:30:30.962432   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:30:30.973340   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:30:30.973421   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:30:30.983508   13189 logs.go:282] 0 containers: []
	W1007 05:30:30.983520   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:30:30.983598   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:30:30.994280   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:30:30.994298   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:30:30.994312   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:30:31.009163   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:30:31.009179   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:30:31.020902   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:30:31.020913   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:30:31.037761   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:30:31.037770   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:30:31.049978   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:30:31.049988   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:30:31.074757   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:30:31.074764   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:30:31.110999   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:30:31.111009   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:30:31.122704   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:30:31.122718   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:30:31.144802   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:30:31.144813   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:30:31.158230   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:30:31.158244   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:30:31.172460   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:30:31.172473   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:30:31.177329   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:30:31.177339   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:30:31.216010   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:30:31.216033   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:30:31.230853   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:30:31.230866   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:30:31.268372   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:30:31.268383   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:30:31.279471   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:30:31.279496   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:30:31.292070   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:30:31.292081   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:30:33.806002   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:36.295583   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:36.295639   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1007 05:30:36.663863   13060 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1007 05:30:36.674203   13060 out.go:177] * Enabled addons: storage-provisioner
	I1007 05:30:36.681162   13060 addons.go:510] duration metric: took 30.498146625s for enable addons: enabled=[storage-provisioner]
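
The default-storageclass addon failed above only because the StorageClass list call timed out; storage-provisioner itself was applied. Once the apiserver answers, the same result can be reached by hand; a hedged sketch (the class name "standard" is minikube's convention and an assumption here):

    # inspect classes, then mark one default via the standard annotation
    kubectl get storageclass
    kubectl patch storageclass standard -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
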
	I1007 05:30:38.808338   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:38.808558   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:30:38.830976   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:30:38.831068   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:30:38.845603   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:30:38.845685   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:30:38.856510   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:30:38.856587   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:30:38.867208   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:30:38.867302   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:30:38.877761   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:30:38.877850   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:30:38.888726   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:30:38.888810   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:30:38.898625   13189 logs.go:282] 0 containers: []
	W1007 05:30:38.898640   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:30:38.898707   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:30:38.909585   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:30:38.909606   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:30:38.909612   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:30:38.949167   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:30:38.949177   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:30:38.963086   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:30:38.963097   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:30:39.000697   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:30:39.000707   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:30:39.012391   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:30:39.012402   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:30:39.023729   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:30:39.023740   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:30:39.061682   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:30:39.061697   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:30:39.076131   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:30:39.076143   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:30:39.092093   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:30:39.092103   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:30:39.105516   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:30:39.105532   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:30:39.129653   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:30:39.129662   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:30:39.149607   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:30:39.149620   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:30:39.161821   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:30:39.161837   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:30:39.166160   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:30:39.166168   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:30:39.180674   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:30:39.180685   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:30:39.196668   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:30:39.196681   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:30:39.216860   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:30:39.216870   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:30:41.296638   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:41.296698   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:41.730942   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:46.298000   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:46.298042   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:46.733188   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:46.733479   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:30:46.760639   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:30:46.760805   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:30:46.778513   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:30:46.778624   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:30:46.793509   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:30:46.793601   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:30:46.804989   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:30:46.805069   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:30:46.815474   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:30:46.815550   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:30:46.825751   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:30:46.825830   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:30:46.836227   13189 logs.go:282] 0 containers: []
	W1007 05:30:46.836240   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:30:46.836304   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:30:46.850731   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:30:46.850749   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:30:46.850755   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:30:46.862372   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:30:46.862386   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:30:46.875824   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:30:46.875838   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:30:46.887676   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:30:46.887688   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:30:46.912317   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:30:46.912325   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:30:46.950713   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:30:46.950725   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:30:46.962115   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:30:46.962125   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:30:46.976814   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:30:46.976828   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:30:46.989226   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:30:46.989238   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:30:47.028055   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:30:47.028074   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:30:47.042207   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:30:47.042221   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:30:47.058777   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:30:47.058788   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:30:47.076334   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:30:47.076345   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:30:47.091075   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:30:47.091087   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:30:47.102672   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:30:47.102682   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:30:47.115111   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:30:47.115124   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:30:47.119374   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:30:47.119381   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:30:49.659369   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:51.299644   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:51.299682   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:54.661502   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:54.661838   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:30:54.688523   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:30:54.688670   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:30:54.708795   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:30:54.708896   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:30:54.723717   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:30:54.723793   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:30:54.734597   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:30:54.734676   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:30:54.745704   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:30:54.745783   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:30:54.756784   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:30:54.756862   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:30:54.767324   13189 logs.go:282] 0 containers: []
	W1007 05:30:54.767335   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:30:54.767402   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:30:54.777784   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:30:54.777801   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:30:54.777807   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:30:54.782167   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:30:54.782174   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:30:54.795220   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:30:54.795231   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:30:54.806774   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:30:54.806787   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:30:54.819899   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:30:54.819911   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:30:54.845298   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:30:54.845306   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:30:54.858515   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:30:54.858526   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:30:54.879272   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:30:54.879288   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:30:54.918904   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:30:54.918916   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:30:54.933951   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:30:54.933965   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:30:54.948233   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:30:54.948244   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:30:54.962807   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:30:54.962818   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:30:54.979013   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:30:54.979023   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:30:54.993313   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:30:54.993324   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:30:55.027749   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:30:55.027764   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:30:55.065221   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:30:55.065231   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:30:55.076454   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:30:55.076464   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:30:56.301756   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:56.301777   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:57.595719   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:01.303828   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:01.303852   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:02.597910   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:02.598096   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:02.614993   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:31:02.615092   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:02.631321   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:31:02.631407   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:02.641982   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:31:02.642058   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:02.652418   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:31:02.652494   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:02.662914   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:31:02.662987   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:02.673815   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:31:02.673891   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:02.684368   13189 logs.go:282] 0 containers: []
	W1007 05:31:02.684388   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:02.684452   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:02.698221   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:31:02.698238   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:02.698243   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:02.722017   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:31:02.722025   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:02.733582   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:02.733597   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:31:02.772005   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:31:02.772014   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:31:02.786264   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:31:02.786274   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:31:02.803998   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:31:02.804009   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:31:02.818553   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:31:02.818562   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:31:02.830058   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:31:02.830068   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:31:02.845339   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:31:02.845349   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:31:02.883648   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:31:02.883660   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:31:02.895504   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:31:02.895517   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:31:02.907076   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:31:02.907087   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:31:02.919864   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:31:02.919876   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:31:02.933658   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:31:02.933671   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:31:02.949687   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:02.949700   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:02.953812   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:02.953819   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:02.996431   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:31:02.996442   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:31:05.514396   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:06.305993   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:06.306180   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:06.317184   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:31:06.317258   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:06.327846   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:31:06.327927   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:06.338274   13060 logs.go:282] 2 containers: [b87a93b50113 205a727ddd11]
	I1007 05:31:06.338344   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:06.349274   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:31:06.349357   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:06.363742   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:31:06.363816   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:06.374396   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:31:06.374473   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:06.388683   13060 logs.go:282] 0 containers: []
	W1007 05:31:06.388694   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:06.388751   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:06.399256   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:31:06.399272   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:31:06.399277   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:06.410810   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:06.410821   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:31:06.430782   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:31:06.430872   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:31:06.446611   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:31:06.446616   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:31:06.461452   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:31:06.461464   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:31:06.476216   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:31:06.476225   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:31:06.488216   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:31:06.488228   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:31:06.500220   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:31:06.500230   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:31:06.511745   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:06.511755   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:06.535297   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:06.535304   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:06.539836   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:06.539845   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:06.576991   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:31:06.577002   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:31:06.592333   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:31:06.592345   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:31:06.604298   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:31:06.604308   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:31:06.622077   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:31:06.622092   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:31:06.622117   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:31:06.622122   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:31:06.622126   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:31:06.622130   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:31:06.622132   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:31:10.516563   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:10.516816   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:10.539874   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:31:10.540009   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:10.555302   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:31:10.555385   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:10.568289   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:31:10.568367   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:10.579147   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:31:10.579224   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:10.589449   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:31:10.589519   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:10.600068   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:31:10.600144   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:10.609851   13189 logs.go:282] 0 containers: []
	W1007 05:31:10.609863   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:10.609929   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:10.620019   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:31:10.620037   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:10.620043   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:31:10.659431   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:31:10.659442   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:31:10.674223   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:31:10.674235   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:31:10.691503   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:31:10.691514   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:31:10.703794   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:31:10.703807   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:31:10.715181   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:10.715194   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:10.719977   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:10.719982   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:10.757185   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:31:10.757200   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:31:10.771554   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:31:10.771569   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:31:10.787139   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:31:10.787154   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:31:10.805760   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:31:10.805773   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:31:10.819437   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:31:10.819446   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:31:10.833679   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:31:10.833688   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:31:10.845755   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:31:10.845768   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:31:10.885551   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:31:10.885563   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:31:10.899418   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:10.899430   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:10.922810   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:31:10.922817   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:13.437777   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:16.626102   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:18.438961   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:18.439103   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:18.453018   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:31:18.453104   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:18.465095   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:31:18.465176   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:18.475349   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:31:18.475429   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:18.486165   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:31:18.486250   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:18.496911   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:31:18.496988   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:18.507578   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:31:18.507656   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:18.517664   13189 logs.go:282] 0 containers: []
	W1007 05:31:18.517676   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:18.517734   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:18.527846   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:31:18.527865   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:18.527871   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:18.532447   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:31:18.532457   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:31:18.544365   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:31:18.544375   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:31:18.556691   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:31:18.556704   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:31:18.568357   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:31:18.568368   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:31:18.585637   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:31:18.585647   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:31:18.597488   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:31:18.597499   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:31:18.611440   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:18.611455   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:31:18.651057   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:31:18.651068   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:31:18.669946   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:31:18.669957   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:31:18.685936   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:18.685948   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:18.720291   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:31:18.720304   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:31:18.759439   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:31:18.759453   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:31:18.773544   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:31:18.773555   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:31:18.786931   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:31:18.786942   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:31:18.798550   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:18.798561   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:18.821196   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:31:18.821204   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:21.628452   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:21.628678   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:21.653213   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:31:21.653348   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:21.669848   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:31:21.669942   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:21.682990   13060 logs.go:282] 2 containers: [b87a93b50113 205a727ddd11]
	I1007 05:31:21.683075   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:21.694711   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:31:21.694785   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:21.705014   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:31:21.705092   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:21.718216   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:31:21.718294   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:21.728213   13060 logs.go:282] 0 containers: []
	W1007 05:31:21.728225   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:21.728294   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:21.738787   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:31:21.738802   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:21.738808   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:21.774370   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:31:21.774385   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:31:21.789143   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:31:21.789152   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:31:21.803481   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:31:21.803493   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:31:21.815497   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:21.815507   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:21.839832   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:31:21.839839   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:21.853024   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:21.853037   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:31:21.870592   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:31:21.870685   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:31:21.886969   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:21.886977   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:21.891524   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:31:21.891531   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:31:21.903405   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:31:21.903420   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:31:21.915343   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:31:21.915355   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:31:21.933001   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:31:21.933011   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:31:21.946841   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:31:21.946852   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:31:21.965353   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:31:21.965364   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:31:21.965393   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:31:21.965398   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:31:21.965401   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:31:21.965405   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:31:21.965408   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:31:21.334469   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:26.336667   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:26.336846   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:26.350817   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:31:26.350907   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:26.365021   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:31:26.365102   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:26.376216   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:31:26.376290   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:26.386869   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:31:26.386950   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:26.396952   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:31:26.397027   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:26.407192   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:31:26.407261   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:26.421844   13189 logs.go:282] 0 containers: []
	W1007 05:31:26.421855   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:26.421923   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:26.432457   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:31:26.432474   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:26.432480   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:26.466811   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:31:26.466826   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:31:26.481388   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:31:26.481400   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:31:26.519162   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:31:26.519172   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:31:26.533241   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:31:26.533252   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:31:26.546023   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:26.546034   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:31:26.585016   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:31:26.585028   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:31:26.610110   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:31:26.610123   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:31:26.624555   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:31:26.624568   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:26.636156   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:31:26.636171   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:31:26.649672   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:31:26.649680   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:31:26.661027   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:31:26.661039   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:31:26.673163   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:31:26.673173   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:31:26.690406   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:31:26.690417   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:31:26.701472   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:31:26.701487   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:31:26.712692   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:26.712703   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:26.736276   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:26.736283   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:29.240956   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:31.969369   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:34.243135   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:34.243402   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:34.260774   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:31:34.260872   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:34.273955   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:31:34.274043   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:34.286838   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:31:34.286920   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:34.302331   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:31:34.302412   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:34.313912   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:31:34.313991   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:34.324702   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:31:34.324777   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:34.335037   13189 logs.go:282] 0 containers: []
	W1007 05:31:34.335049   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:34.335113   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:34.345630   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:31:34.345651   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:31:34.345657   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:31:34.357722   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:34.357735   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:34.392890   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:31:34.392902   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:31:34.407236   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:31:34.407247   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:31:34.423856   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:31:34.423868   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:31:34.438358   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:31:34.438372   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:34.450096   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:31:34.450107   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:31:34.463888   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:31:34.463903   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:31:34.479510   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:31:34.479524   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:31:34.494402   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:31:34.494416   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:31:34.505446   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:34.505456   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:34.529079   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:34.529088   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:34.533107   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:31:34.533113   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:31:34.570694   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:31:34.570704   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:31:34.585435   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:34.585448   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:31:34.622133   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:31:34.622141   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:31:34.639441   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:31:34.639456   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:31:36.971795   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:36.972305   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:37.011022   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:31:37.011172   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:37.030526   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:31:37.030650   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:37.051419   13060 logs.go:282] 2 containers: [b87a93b50113 205a727ddd11]
	I1007 05:31:37.051502   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:37.063244   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:31:37.063328   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:37.074545   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:31:37.074629   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:37.086086   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:31:37.086159   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:37.097142   13060 logs.go:282] 0 containers: []
	W1007 05:31:37.097154   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:37.097221   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:37.108070   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:31:37.108086   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:37.108091   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:37.112612   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:31:37.112621   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:31:37.127253   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:31:37.127263   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:31:37.139644   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:31:37.139656   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:31:37.152628   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:31:37.152638   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:31:37.171272   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:37.171287   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:31:37.191500   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:31:37.191594   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:31:37.207911   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:37.207918   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:37.251401   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:31:37.251411   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:31:37.266548   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:31:37.266563   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:31:37.282240   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:31:37.282250   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:31:37.297355   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:31:37.297371   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:31:37.313497   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:37.313507   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:37.338230   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:31:37.338239   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:37.349872   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:31:37.349882   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:31:37.349908   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:31:37.349915   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:31:37.349919   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:31:37.349923   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:31:37.349927   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:31:37.152510   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:42.154646   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:42.154802   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:42.168101   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:31:42.168188   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:42.179799   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:31:42.179875   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:42.198063   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:31:42.198140   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:42.208761   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:31:42.208837   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:42.219922   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:31:42.220007   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:42.235089   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:31:42.235165   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:42.246817   13189 logs.go:282] 0 containers: []
	W1007 05:31:42.246829   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:42.246896   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:42.256948   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:31:42.256962   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:42.256968   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:42.261543   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:42.261550   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:42.297144   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:31:42.297155   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:31:42.313207   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:31:42.313220   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:31:42.324709   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:31:42.324722   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:31:42.336046   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:31:42.336056   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:31:42.347356   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:42.347368   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:31:42.384839   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:31:42.384848   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:31:42.400572   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:31:42.400582   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:31:42.438594   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:31:42.438605   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:31:42.452219   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:31:42.452228   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:31:42.468384   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:31:42.468403   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:31:42.482042   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:31:42.482054   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:31:42.494468   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:31:42.494480   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:31:42.514362   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:42.514375   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:42.538200   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:31:42.538208   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:31:42.551260   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:31:42.551274   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:45.069713   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:47.352650   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:50.071449   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:50.071624   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:50.090093   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:31:50.090187   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:50.102417   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:31:50.102497   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:50.112826   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:31:50.112893   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:50.126878   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:31:50.126958   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:50.137729   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:31:50.137812   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:50.148326   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:31:50.148407   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:50.161728   13189 logs.go:282] 0 containers: []
	W1007 05:31:50.161739   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:50.161804   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:50.172441   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:31:50.172461   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:31:50.172466   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:31:50.184685   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:31:50.184698   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:31:50.196351   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:31:50.196362   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:31:50.208132   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:31:50.208143   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:31:50.220633   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:31:50.220645   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:31:50.243353   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:31:50.243364   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:31:50.262440   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:31:50.262451   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:31:50.280042   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:50.280057   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:50.316781   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:50.316792   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:50.320934   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:31:50.320940   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:31:50.335411   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:31:50.335421   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:31:50.374546   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:31:50.374559   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:31:50.390162   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:31:50.390172   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:31:50.404499   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:31:50.404509   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:31:50.419703   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:50.419712   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:50.443228   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:50.443236   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:31:50.481203   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:31:50.481215   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:52.355112   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:52.355385   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:52.373562   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:31:52.373651   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:52.387077   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:31:52.387160   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:52.398172   13060 logs.go:282] 2 containers: [b87a93b50113 205a727ddd11]
	I1007 05:31:52.398244   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:52.408985   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:31:52.409062   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:52.420058   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:31:52.420132   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:52.431041   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:31:52.431115   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:52.441405   13060 logs.go:282] 0 containers: []
	W1007 05:31:52.441417   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:52.441476   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:52.452192   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:31:52.452210   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:31:52.452215   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:31:52.466706   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:31:52.466717   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:31:52.480973   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:31:52.480982   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:31:52.493767   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:31:52.493776   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:31:52.505736   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:31:52.505749   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:31:52.523336   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:52.523346   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:52.528948   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:52.528958   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:52.565539   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:31:52.565555   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:31:52.578373   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:31:52.578389   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:31:52.591016   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:52.591027   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:52.615386   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:31:52.615394   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:52.627632   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:52.627644   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:31:52.646858   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:31:52.646949   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:31:52.662501   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:31:52.662508   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:31:52.678244   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:31:52.678257   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:31:52.678282   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:31:52.678286   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:31:52.678290   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:31:52.678294   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:31:52.678297   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:31:52.995360   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:57.997610   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:57.997852   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:58.014410   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:31:58.014506   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:58.026826   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:31:58.026908   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:58.038462   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:31:58.038544   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:58.049482   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:31:58.049557   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:58.059957   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:31:58.060033   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:58.071039   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:31:58.071122   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:58.086524   13189 logs.go:282] 0 containers: []
	W1007 05:31:58.086535   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:58.086599   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:58.097405   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
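Each diagnostics pass opens with eight docker ps -a queries, one per expected control-plane component, keyed on the kubeadm container-name prefix k8s_; the -a flag matters because exited containers still carry the logs of interest. A sketch of that enumeration, run locally rather than through ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists all containers (running or exited) whose name matches
    // the kubeadm naming convention k8s_<component>, returning their short IDs.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids) // matches the logs.go:282 lines
    	}
    }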
	I1007 05:31:58.097422   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:58.097426   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:58.120794   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:31:58.120801   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:58.132526   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:58.132538   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:58.136971   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:31:58.136977   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:31:58.150772   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:31:58.150783   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:31:58.164876   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:31:58.164886   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:31:58.178944   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:31:58.178956   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:31:58.190615   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:31:58.190626   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:31:58.202028   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:58.202040   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:31:58.238821   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:31:58.238829   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:31:58.251318   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:31:58.251328   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:31:58.268447   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:31:58.268457   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:31:58.279529   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:58.279539   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:58.314342   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:31:58.314353   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:31:58.352546   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:31:58.352556   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:31:58.368390   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:31:58.368402   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:31:58.381812   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:31:58.381827   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
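After enumeration, every "Gathering logs for X ..." line is followed by one bounded read over SSH: docker logs --tail 400 for containers, journalctl -n 400 for the docker/cri-docker and kubelet units, a filtered dmesg, and kubectl describe nodes via the in-VM binary. The container half reduces to a one-liner; a hedged local equivalent:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tailContainer fetches the last n lines of a container's output, the same
    // bounded read the log shows for each component.
    func tailContainer(id string, n int) (string, error) {
    	out, err := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("docker logs --tail %d %s", n, id)).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	// Container ID taken from the log above; on another cluster it will differ.
    	out, err := tailContainer("15e4580af5ec", 400)
    	if err != nil {
    		fmt.Println("error:", err)
    	}
    	fmt.Print(out)
    }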
	I1007 05:32:02.680627   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:00.895289   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:07.682883   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:07.683008   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:07.696173   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:32:07.696260   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:07.707712   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:32:07.707793   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:07.721567   13060 logs.go:282] 2 containers: [b87a93b50113 205a727ddd11]
	I1007 05:32:07.721645   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:07.732560   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:32:07.732624   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:07.743533   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:32:07.743608   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:07.754461   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:32:07.754543   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:07.764686   13060 logs.go:282] 0 containers: []
	W1007 05:32:07.764701   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:07.764760   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:07.775609   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:32:07.775622   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:32:07.775627   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:07.787427   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:07.787436   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:07.792386   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:07.792392   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:07.829199   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:32:07.829214   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:32:07.843917   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:32:07.843930   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:32:07.857922   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:32:07.857931   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:32:07.870149   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:32:07.870160   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:32:07.882924   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:07.882935   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:07.908667   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:07.908679   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:32:07.928570   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:32:07.928666   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:32:07.944662   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:32:07.944672   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:32:07.957117   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:32:07.957129   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:32:07.975255   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:32:07.975265   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:32:07.993597   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:32:07.993607   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:32:08.006583   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:32:08.006597   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:32:08.006626   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:32:08.006631   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:32:08.006635   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:32:08.006640   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:32:08.006643   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:32:05.897879   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:05.898106   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:05.920661   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:32:05.920767   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:05.936174   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:32:05.936256   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:05.948573   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:32:05.948653   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:05.963426   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:32:05.963513   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:05.974064   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:32:05.974144   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:05.984932   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:32:05.985007   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:05.994987   13189 logs.go:282] 0 containers: []
	W1007 05:32:05.995002   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:05.995069   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:06.005543   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:32:06.005561   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:06.005566   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:32:06.043176   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:32:06.043185   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:32:06.057339   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:32:06.057349   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:32:06.068421   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:32:06.068433   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:06.084164   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:32:06.084175   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:32:06.123780   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:32:06.123795   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:32:06.140086   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:32:06.140102   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:32:06.157002   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:32:06.157015   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:32:06.174660   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:32:06.174674   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:32:06.193999   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:32:06.194012   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:32:06.205963   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:32:06.205974   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:32:06.217218   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:32:06.217232   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:32:06.234739   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:06.234752   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:06.239175   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:06.239184   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:06.274872   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:32:06.274883   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:32:06.286298   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:32:06.286307   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:32:06.300306   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:06.300317   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:08.826569   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:13.827757   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:13.827934   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:13.842072   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:32:13.842161   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:13.853219   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:32:13.853312   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:13.863745   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:32:13.863826   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:13.875046   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:32:13.875124   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:13.885544   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:32:13.885620   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:13.896104   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:32:13.896191   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:13.906501   13189 logs.go:282] 0 containers: []
	W1007 05:32:13.906514   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:13.906572   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:13.916699   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:32:13.916717   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:32:13.916724   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:32:13.931096   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:32:13.931107   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:32:13.943351   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:32:13.943364   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:32:13.963321   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:32:13.963338   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:32:13.976822   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:13.976832   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:14.000852   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:14.000861   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:14.034600   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:32:14.034611   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:32:14.047532   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:32:14.047542   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:32:14.059067   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:32:14.059080   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:32:14.070882   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:32:14.070892   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:32:14.094408   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:32:14.094422   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:14.106659   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:14.106674   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:32:14.146102   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:14.146109   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:14.150165   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:32:14.150174   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:32:14.168873   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:32:14.168886   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:32:14.208492   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:32:14.208506   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:32:14.223448   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:32:14.223459   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:32:18.010613   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:16.737521   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:23.012791   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:23.013045   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:23.035924   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:32:23.036056   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:23.052207   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:32:23.052303   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:23.065083   13060 logs.go:282] 4 containers: [d57fd0f376b8 33e114d3b8f3 b87a93b50113 205a727ddd11]
	I1007 05:32:23.065166   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:23.075958   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:32:23.076036   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:23.088554   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:32:23.088639   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:23.099467   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:32:23.099548   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:23.109797   13060 logs.go:282] 0 containers: []
	W1007 05:32:23.109809   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:23.109878   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:23.120521   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:32:23.120541   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:23.120547   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:32:23.138042   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:32:23.138134   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:32:23.153825   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:32:23.153832   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:32:23.165399   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:23.165410   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:23.189828   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:32:23.189836   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:32:23.206766   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:32:23.206775   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:32:23.220571   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:32:23.220587   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:32:23.237946   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:32:23.237956   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:32:23.249692   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:32:23.249702   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:23.261805   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:23.261814   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:23.266451   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:23.266457   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:23.302577   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:32:23.302590   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:32:23.317552   13060 logs.go:123] Gathering logs for coredns [d57fd0f376b8] ...
	I1007 05:32:23.317563   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57fd0f376b8"
	I1007 05:32:23.333110   13060 logs.go:123] Gathering logs for coredns [33e114d3b8f3] ...
	I1007 05:32:23.333120   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e114d3b8f3"
	I1007 05:32:23.344229   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:32:23.344240   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:32:23.355601   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:32:23.355612   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:32:23.371350   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:32:23.371365   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:32:23.371392   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:32:23.371398   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:32:23.371403   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:32:23.371406   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:32:23.371410   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:32:21.739804   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:21.740045   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:21.760589   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:32:21.760698   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:21.775465   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:32:21.775550   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:21.787984   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:32:21.788052   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:21.799231   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:32:21.799315   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:21.809955   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:32:21.810021   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:21.820126   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:32:21.820197   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:21.830712   13189 logs.go:282] 0 containers: []
	W1007 05:32:21.830730   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:21.830797   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:21.840931   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:32:21.840948   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:32:21.840954   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:32:21.854854   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:32:21.854868   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:32:21.872167   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:21.872179   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:32:21.909965   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:32:21.909976   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:32:21.924046   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:32:21.924058   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:32:21.941513   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:32:21.941527   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:32:21.952970   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:32:21.952983   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:32:21.965044   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:21.965059   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:21.969263   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:21.969271   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:22.005013   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:32:22.005051   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:32:22.051569   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:32:22.051580   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:32:22.063181   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:32:22.063192   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:32:22.078377   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:32:22.078390   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:32:22.090490   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:32:22.090502   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:22.102839   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:32:22.102854   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:32:22.116639   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:32:22.116654   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:32:22.128164   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:22.128175   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:24.652100   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:29.654332   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:29.654491   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:29.666780   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:32:29.666862   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:29.677707   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:32:29.677800   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:29.688130   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:32:29.688205   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:29.698618   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:32:29.698691   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:29.709936   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:32:29.710011   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:29.721185   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:32:29.721259   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:29.731389   13189 logs.go:282] 0 containers: []
	W1007 05:32:29.731400   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:29.731454   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:29.741647   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:32:29.741664   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:32:29.741670   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:32:29.764816   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:32:29.764825   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:32:29.776847   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:32:29.776859   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:32:29.792622   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:32:29.792632   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:32:29.804035   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:29.804043   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:29.825515   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:32:29.825523   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:32:29.840187   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:32:29.840201   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:32:29.878013   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:32:29.878028   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:32:29.895477   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:32:29.895488   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:32:29.909130   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:32:29.909139   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:32:29.920640   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:32:29.920654   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:29.932271   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:29.932282   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:29.936475   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:32:29.936481   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:32:29.948258   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:32:29.948269   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:32:29.963244   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:29.963254   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:29.998429   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:32:29.998439   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:32:30.013060   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:30.013072   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:32:33.375427   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:32.554531   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:38.376250   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:38.376470   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:38.393201   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:32:38.393299   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:38.405823   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:32:38.405905   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:38.417396   13060 logs.go:282] 4 containers: [d57fd0f376b8 33e114d3b8f3 b87a93b50113 205a727ddd11]
	I1007 05:32:38.417471   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:38.427949   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:32:38.428025   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:38.438827   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:32:38.438899   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:38.449412   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:32:38.449473   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:38.460001   13060 logs.go:282] 0 containers: []
	W1007 05:32:38.460014   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:38.460079   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:38.470501   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:32:38.470515   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:38.470520   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:38.506443   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:32:38.506453   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:32:38.521041   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:32:38.521051   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:32:38.534865   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:32:38.534874   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:32:38.550056   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:38.550064   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:38.573858   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:38.573868   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:38.578517   13060 logs.go:123] Gathering logs for coredns [33e114d3b8f3] ...
	I1007 05:32:38.578526   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e114d3b8f3"
	I1007 05:32:38.589977   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:32:38.589987   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:32:38.601581   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:32:38.601594   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:38.613221   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:38.613233   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:32:38.631898   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:32:38.631992   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:32:38.648376   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:32:38.648384   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:32:38.660070   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:32:38.660082   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:32:38.671585   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:32:38.671597   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:32:38.689510   13060 logs.go:123] Gathering logs for coredns [d57fd0f376b8] ...
	I1007 05:32:38.689521   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57fd0f376b8"
	I1007 05:32:38.701105   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:32:38.701116   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:32:38.716921   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:32:38.716931   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:32:38.716956   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:32:38.716960   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:32:38.716973   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:32:38.716977   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:32:38.716979   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:32:37.556927   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:37.557505   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:37.597180   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:32:37.597335   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:37.619658   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:32:37.619795   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:37.636841   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:32:37.636927   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:37.650273   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:32:37.650353   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:37.664314   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:32:37.664393   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:37.676867   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:32:37.676954   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:37.692057   13189 logs.go:282] 0 containers: []
	W1007 05:32:37.692073   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:37.692134   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:37.702521   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:32:37.702539   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:32:37.702545   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:32:37.716495   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:32:37.716507   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:32:37.734615   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:32:37.734625   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:32:37.748674   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:32:37.748684   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:37.761117   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:37.761134   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:37.765397   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:37.765405   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:37.800764   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:32:37.800774   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:32:37.812324   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:37.812333   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:37.835771   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:37.835786   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:32:37.875716   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:32:37.875742   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:32:37.914591   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:32:37.914605   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:32:37.925645   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:32:37.925660   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:32:37.937725   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:32:37.937738   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:32:37.952850   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:32:37.952859   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:32:37.967131   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:32:37.967147   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:32:37.981893   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:32:37.981907   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:32:37.993821   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:32:37.993832   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:32:40.507221   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:45.509011   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:45.509502   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:45.540714   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:32:45.540860   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:45.560026   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:32:45.560153   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:45.575154   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:32:45.575229   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:48.719590   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:45.590692   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:32:45.590776   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:45.601664   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:32:45.601740   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:45.612653   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:32:45.612732   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:45.622648   13189 logs.go:282] 0 containers: []
	W1007 05:32:45.622660   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:45.622722   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:45.633506   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:32:45.633524   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:32:45.633530   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:32:45.650002   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:32:45.650016   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:32:45.665876   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:32:45.665887   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:32:45.679639   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:32:45.679655   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:32:45.694319   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:32:45.694328   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:32:45.711018   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:32:45.711031   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:32:45.722995   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:32:45.723005   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:32:45.737474   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:32:45.737482   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:45.749413   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:45.749430   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:32:45.788323   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:45.788331   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:45.822598   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:32:45.822614   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:32:45.861032   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:32:45.861043   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:32:45.873170   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:45.873181   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:45.895085   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:45.895094   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:45.899639   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:32:45.899646   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:32:45.916643   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:32:45.916653   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:32:45.929343   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:32:45.929354   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
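
The pass above is minikube's log-gathering cycle: each control-plane component's container is located via the kubelet's k8s_<component> naming convention, then its last 400 log lines are tailed. A condensed, hand-runnable sketch of the same pattern, assuming Docker is the runtime as in this run:

	# Enumerate each component's containers and tail their logs,
	# mirroring the `docker ps -a --filter=name=k8s_...` calls above.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager storage-provisioner; do
	  for id in $(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'); do
	    echo "== ${c} ${id} =="
	    docker logs --tail 400 "${id}"
	  done
	done
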
	I1007 05:32:48.445290   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:53.721867   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:53.721953   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:53.733108   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:32:53.733188   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:53.745114   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:32:53.745203   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:53.757468   13060 logs.go:282] 4 containers: [d57fd0f376b8 33e114d3b8f3 b87a93b50113 205a727ddd11]
	I1007 05:32:53.757553   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:53.773012   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:32:53.773097   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:53.786808   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:32:53.786891   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:53.799065   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:32:53.799148   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:53.810474   13060 logs.go:282] 0 containers: []
	W1007 05:32:53.810486   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:53.810555   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:53.821799   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:32:53.821845   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:32:53.821852   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:32:53.837848   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:32:53.837863   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:32:53.853636   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:32:53.853651   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:32:53.866270   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:32:53.866283   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:32:53.882359   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:53.882374   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:32:53.903056   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:32:53.903150   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:32:53.919118   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:53.919125   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:53.923558   13060 logs.go:123] Gathering logs for coredns [d57fd0f376b8] ...
	I1007 05:32:53.923567   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57fd0f376b8"
	I1007 05:32:53.938049   13060 logs.go:123] Gathering logs for coredns [33e114d3b8f3] ...
	I1007 05:32:53.938060   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e114d3b8f3"
	I1007 05:32:53.968383   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:32:53.968393   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:32:53.986778   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:53.986792   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:54.011835   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:32:54.011845   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:54.023738   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:54.023749   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:54.058101   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:32:54.058116   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:32:54.069704   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:32:54.069715   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:32:54.094156   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:32:54.094172   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:32:54.106056   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:32:54.106071   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:32:54.106102   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:32:54.106108   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:32:54.106112   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:32:54.106128   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:32:54.106133   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
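
The "Problems detected in kubelet" block above recurs throughout this run: the node authorizer rejects the kubelet's list/watch of the coredns ConfigMap because it finds no relationship between node 'running-upgrade-494000' and that object. With an admin kubeconfig pointed at this cluster, the same denial can be checked directly (a sketch, not part of the test run):

	# Confirm the ConfigMap exists, then ask RBAC the same question the
	# kubelet is failing, impersonating the node user from the log above.
	kubectl -n kube-system get configmap coredns
	kubectl auth can-i list configmaps -n kube-system \
	  --as=system:node:running-upgrade-494000
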
	I1007 05:32:53.447512   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:53.447873   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:53.480539   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:32:53.480668   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:53.498229   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:32:53.498333   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:53.517077   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:32:53.517160   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:53.528575   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:32:53.528652   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:53.539464   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:32:53.539535   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:53.550699   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:32:53.550779   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:53.561340   13189 logs.go:282] 0 containers: []
	W1007 05:32:53.561352   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:53.561420   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:53.572361   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:32:53.572379   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:53.572384   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:32:53.609724   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:32:53.609732   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:32:53.623515   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:32:53.623525   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:32:53.635184   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:32:53.635215   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:32:53.654280   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:53.654291   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:53.688602   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:32:53.688619   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:32:53.702826   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:32:53.702842   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:32:53.717547   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:32:53.717559   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:32:53.734634   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:32:53.734643   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:32:53.786848   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:32:53.786859   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:32:53.803489   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:32:53.803501   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:32:53.818982   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:32:53.818993   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:32:53.832489   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:32:53.832501   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:53.845223   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:53.845239   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:53.849930   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:32:53.849944   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:32:53.862958   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:32:53.862970   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:32:53.877910   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:53.877922   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:56.404613   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:04.110127   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:01.406868   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:01.407165   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:33:01.436015   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:33:01.436152   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:33:01.451632   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:33:01.451738   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:33:01.464707   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:33:01.464789   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:33:01.475601   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:33:01.475680   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:33:01.486418   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:33:01.486497   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:33:01.497794   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:33:01.497865   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:33:01.507784   13189 logs.go:282] 0 containers: []
	W1007 05:33:01.507794   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:33:01.507854   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:33:01.522053   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:33:01.522070   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:33:01.522076   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:33:01.536576   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:33:01.536587   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:33:01.550510   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:33:01.550521   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:33:01.561806   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:33:01.561816   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:33:01.596723   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:33:01.596735   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:33:01.632863   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:33:01.632874   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:33:01.644217   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:33:01.644232   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:33:01.655431   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:33:01.655444   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:33:01.659734   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:33:01.659743   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:33:01.674269   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:33:01.674281   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:33:01.697709   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:33:01.697719   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:33:01.737026   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:33:01.737038   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:33:01.754708   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:33:01.754720   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:33:01.769524   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:33:01.769535   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:33:01.783476   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:33:01.783486   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:33:01.795644   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:33:01.795658   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:33:01.809838   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:33:01.809847   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:33:04.323770   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:09.110886   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:09.111052   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:33:09.124454   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:33:09.124546   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:33:09.138785   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:33:09.138869   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:33:09.149207   13060 logs.go:282] 4 containers: [d57fd0f376b8 33e114d3b8f3 b87a93b50113 205a727ddd11]
	I1007 05:33:09.149294   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:33:09.160502   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:33:09.160579   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:33:09.170994   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:33:09.171075   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:33:09.181594   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:33:09.181671   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:33:09.191808   13060 logs.go:282] 0 containers: []
	W1007 05:33:09.191819   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:33:09.191881   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:33:09.202695   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:33:09.202714   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:33:09.202719   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:33:09.217059   13060 logs.go:123] Gathering logs for coredns [33e114d3b8f3] ...
	I1007 05:33:09.217069   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e114d3b8f3"
	I1007 05:33:09.228706   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:33:09.228716   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:33:09.240656   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:33:09.240669   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:33:09.259648   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:33:09.259657   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:33:09.271836   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:33:09.271848   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:33:09.276815   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:33:09.276822   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:33:09.313309   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:33:09.313319   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:33:09.324792   13060 logs.go:123] Gathering logs for coredns [d57fd0f376b8] ...
	I1007 05:33:09.324807   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57fd0f376b8"
	I1007 05:33:09.337789   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:33:09.337799   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:33:09.353511   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:33:09.353524   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:33:09.366113   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:33:09.366127   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:33:09.378946   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:33:09.378960   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:33:09.407657   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:33:09.407675   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:33:09.428351   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:33:09.428449   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:33:09.444808   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:33:09.444826   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:33:09.459079   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:33:09.459088   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:33:09.459116   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:33:09.459120   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:33:09.459123   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:33:09.459126   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:33:09.459130   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:33:09.324010   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:09.324090   13189 kubeadm.go:597] duration metric: took 4m4.061115041s to restartPrimaryControlPlane
	W1007 05:33:09.324130   13189 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 05:33:09.324144   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1007 05:33:10.362061   13189 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.037925s)
	I1007 05:33:10.362151   13189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 05:33:10.367102   13189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 05:33:10.370309   13189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 05:33:10.373042   13189 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 05:33:10.373047   13189 kubeadm.go:157] found existing configuration files:
	
	I1007 05:33:10.373071   13189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/admin.conf
	I1007 05:33:10.375499   13189 kubeadm.go:163] "https://control-plane.minikube.internal:52462" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 05:33:10.375532   13189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 05:33:10.378467   13189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/kubelet.conf
	I1007 05:33:10.381628   13189 kubeadm.go:163] "https://control-plane.minikube.internal:52462" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 05:33:10.381658   13189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 05:33:10.384457   13189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/controller-manager.conf
	I1007 05:33:10.386952   13189 kubeadm.go:163] "https://control-plane.minikube.internal:52462" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 05:33:10.386985   13189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 05:33:10.390223   13189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/scheduler.conf
	I1007 05:33:10.393381   13189 kubeadm.go:163] "https://control-plane.minikube.internal:52462" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 05:33:10.393410   13189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
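
The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each config under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise (including when the file is simply missing, as it is here after `kubeadm reset`). An equivalent sketch of that logic:

	# Keep a kubeconfig only if it points at the expected endpoint.
	endpoint="https://control-plane.minikube.internal:52462"
	for f in admin kubelet controller-manager scheduler; do
	  # grep fails on both a mismatch and a missing file, so rm -f covers
	  # both cases, matching the "Process exited with status 2" entries.
	  sudo grep -q "${endpoint}" "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done
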
	I1007 05:33:10.395974   13189 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 05:33:10.412067   13189 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1007 05:33:10.412097   13189 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 05:33:10.461623   13189 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 05:33:10.461681   13189 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 05:33:10.461727   13189 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 05:33:10.510685   13189 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 05:33:10.517883   13189 out.go:235]   - Generating certificates and keys ...
	I1007 05:33:10.517920   13189 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 05:33:10.517950   13189 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 05:33:10.517986   13189 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 05:33:10.518016   13189 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 05:33:10.518060   13189 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 05:33:10.518095   13189 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 05:33:10.518130   13189 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 05:33:10.518163   13189 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 05:33:10.518213   13189 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 05:33:10.518267   13189 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 05:33:10.518286   13189 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 05:33:10.518318   13189 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 05:33:10.651005   13189 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 05:33:10.738099   13189 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 05:33:10.834311   13189 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 05:33:10.882995   13189 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 05:33:10.913761   13189 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 05:33:10.914368   13189 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 05:33:10.914572   13189 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 05:33:11.004910   13189 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 05:33:11.008886   13189 out.go:235]   - Booting up control plane ...
	I1007 05:33:11.008935   13189 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 05:33:11.008985   13189 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 05:33:11.009028   13189 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 05:33:11.009087   13189 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 05:33:11.009195   13189 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 05:33:15.507186   13189 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.500866 seconds
	I1007 05:33:15.507277   13189 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 05:33:15.510753   13189 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 05:33:16.031658   13189 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 05:33:16.031945   13189 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-431000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 05:33:16.539670   13189 kubeadm.go:310] [bootstrap-token] Using token: 1669s0.m3g28gg0e6g0bg5g
	I1007 05:33:16.545737   13189 out.go:235]   - Configuring RBAC rules ...
	I1007 05:33:16.545799   13189 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 05:33:16.550648   13189 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 05:33:16.552542   13189 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 05:33:16.553423   13189 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 05:33:16.554257   13189 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 05:33:16.555292   13189 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 05:33:16.558243   13189 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 05:33:16.712950   13189 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 05:33:16.957140   13189 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 05:33:16.957649   13189 kubeadm.go:310] 
	I1007 05:33:16.957685   13189 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 05:33:16.957688   13189 kubeadm.go:310] 
	I1007 05:33:16.957735   13189 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 05:33:16.957740   13189 kubeadm.go:310] 
	I1007 05:33:16.957755   13189 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 05:33:16.957785   13189 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 05:33:16.957810   13189 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 05:33:16.957812   13189 kubeadm.go:310] 
	I1007 05:33:16.957848   13189 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 05:33:16.957851   13189 kubeadm.go:310] 
	I1007 05:33:16.957904   13189 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 05:33:16.957909   13189 kubeadm.go:310] 
	I1007 05:33:16.957936   13189 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 05:33:16.957985   13189 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 05:33:16.958036   13189 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 05:33:16.958040   13189 kubeadm.go:310] 
	I1007 05:33:16.958096   13189 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 05:33:16.958154   13189 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 05:33:16.958159   13189 kubeadm.go:310] 
	I1007 05:33:16.958199   13189 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1669s0.m3g28gg0e6g0bg5g \
	I1007 05:33:16.958264   13189 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a062c1d11feacd55c1665e5cde1180fa46a0cb1088d7ea40ca5bcc8cf3f8fe8c \
	I1007 05:33:16.958276   13189 kubeadm.go:310] 	--control-plane 
	I1007 05:33:16.958281   13189 kubeadm.go:310] 
	I1007 05:33:16.958327   13189 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 05:33:16.958330   13189 kubeadm.go:310] 
	I1007 05:33:16.958373   13189 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1669s0.m3g28gg0e6g0bg5g \
	I1007 05:33:16.958428   13189 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a062c1d11feacd55c1665e5cde1180fa46a0cb1088d7ea40ca5bcc8cf3f8fe8c 
	I1007 05:33:16.958494   13189 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
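
The preflight warning above notes that the kubelet systemd unit is not enabled; init still succeeds because minikube starts the unit explicitly (see the `sudo systemctl start kubelet` call further down). To persist it across reboots, per the warning text:

	sudo systemctl enable kubelet.service
	sudo systemctl is-enabled kubelet.service   # should now print "enabled"
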
	I1007 05:33:16.958554   13189 cni.go:84] Creating CNI manager for ""
	I1007 05:33:16.958563   13189 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:33:16.962678   13189 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 05:33:16.969793   13189 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 05:33:16.972893   13189 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
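
Here minikube writes its bridge CNI config from memory; the log records only the size (496 bytes), not the contents. For orientation, a typical bridge conflist has roughly this shape (illustrative values only, not the exact file written here):

	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
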
	I1007 05:33:16.977555   13189 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 05:33:16.977612   13189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 05:33:16.977628   13189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-431000 minikube.k8s.io/updated_at=2024_10_07T05_33_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=stopped-upgrade-431000 minikube.k8s.io/primary=true
	I1007 05:33:17.016613   13189 ops.go:34] apiserver oom_adj: -16
	I1007 05:33:17.016676   13189 kubeadm.go:1113] duration metric: took 39.107333ms to wait for elevateKubeSystemPrivileges
	I1007 05:33:17.016688   13189 kubeadm.go:394] duration metric: took 4m11.767384375s to StartCluster
	I1007 05:33:17.016698   13189 settings.go:142] acquiring lock: {Name:mk5a4e22b238c18e7ccc84c412018fc85088176f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:33:17.016801   13189 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:33:17.017261   13189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/kubeconfig: {Name:mkfa460adb077498749c83f32a682247504db19f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:33:17.017465   13189 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:33:17.017470   13189 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 05:33:17.017510   13189 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-431000"
	I1007 05:33:17.017518   13189 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-431000"
	W1007 05:33:17.017521   13189 addons.go:243] addon storage-provisioner should already be in state true
	I1007 05:33:17.017520   13189 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-431000"
	I1007 05:33:17.017534   13189 host.go:66] Checking if "stopped-upgrade-431000" exists ...
	I1007 05:33:17.017538   13189 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-431000"
	I1007 05:33:17.017565   13189 config.go:182] Loaded profile config "stopped-upgrade-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:33:17.017957   13189 retry.go:31] will retry after 1.169099476s: connect: dial unix /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/monitor: connect: connection refused
	I1007 05:33:17.018655   13189 kapi.go:59] client config for stopped-upgrade-431000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104d33ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 05:33:17.018816   13189 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-431000"
	W1007 05:33:17.018821   13189 addons.go:243] addon default-storageclass should already be in state true
	I1007 05:33:17.018827   13189 host.go:66] Checking if "stopped-upgrade-431000" exists ...
	I1007 05:33:17.019339   13189 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 05:33:17.019344   13189 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 05:33:17.019349   13189 sshutil.go:53] new ssh client: &{IP:localhost Port:52428 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/id_rsa Username:docker}
	I1007 05:33:17.021646   13189 out.go:177] * Verifying Kubernetes components...
	I1007 05:33:17.029724   13189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:33:17.123543   13189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 05:33:17.128760   13189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 05:33:17.130838   13189 api_server.go:52] waiting for apiserver process to appear ...
	I1007 05:33:17.130884   13189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:33:17.436442   13189 api_server.go:72] duration metric: took 418.971ms to wait for apiserver process to appear ...
	I1007 05:33:17.436456   13189 api_server.go:88] waiting for apiserver healthz status ...
	I1007 05:33:17.436466   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:17.436572   13189 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 05:33:17.436580   13189 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 05:33:18.193786   13189 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:33:19.461116   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:18.197801   13189 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 05:33:18.197808   13189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 05:33:18.197815   13189 sshutil.go:53] new ssh client: &{IP:localhost Port:52428 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/id_rsa Username:docker}
	I1007 05:33:18.235004   13189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 05:33:24.463328   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:24.463493   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:33:24.475137   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:33:24.475226   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:33:22.438470   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:22.438510   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:24.486262   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:33:24.486331   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:33:24.500605   13060 logs.go:282] 4 containers: [d57fd0f376b8 33e114d3b8f3 b87a93b50113 205a727ddd11]
	I1007 05:33:24.500691   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:33:24.521017   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:33:24.521097   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:33:24.531274   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:33:24.531344   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:33:24.541587   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:33:24.541665   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:33:24.551795   13060 logs.go:282] 0 containers: []
	W1007 05:33:24.551807   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:33:24.551876   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:33:24.562420   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:33:24.562436   13060 logs.go:123] Gathering logs for coredns [d57fd0f376b8] ...
	I1007 05:33:24.562441   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57fd0f376b8"
	I1007 05:33:24.573977   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:33:24.573989   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:33:24.586009   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:33:24.586021   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:33:24.597514   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:33:24.597524   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:33:24.609693   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:33:24.609703   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:33:24.645362   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:33:24.645376   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:33:24.659593   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:33:24.659602   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:33:24.683929   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:33:24.683939   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:33:24.688551   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:33:24.688560   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:33:24.702803   13060 logs.go:123] Gathering logs for coredns [33e114d3b8f3] ...
	I1007 05:33:24.702815   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e114d3b8f3"
	I1007 05:33:24.718682   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:33:24.718692   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:33:24.730126   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:33:24.730134   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:33:24.748059   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:33:24.748152   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:33:24.763845   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:33:24.763851   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:33:24.779841   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:33:24.779852   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:33:24.795234   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:33:24.795246   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:33:24.813363   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:33:24.813373   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:33:24.813403   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:33:24.813407   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:33:24.813411   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:33:24.813414   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:33:24.813418   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:33:27.438776   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:27.438820   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:32.439168   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:32.439206   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:34.817317   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:37.439667   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:37.439713   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:39.818099   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:39.818313   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:33:39.838681   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:33:39.838791   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:33:39.853124   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:33:39.853213   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:33:39.865888   13060 logs.go:282] 4 containers: [d57fd0f376b8 33e114d3b8f3 b87a93b50113 205a727ddd11]
	I1007 05:33:39.865978   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:33:39.879867   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:33:39.879948   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:33:39.890603   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:33:39.890702   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:33:39.904140   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:33:39.904206   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:33:39.916172   13060 logs.go:282] 0 containers: []
	W1007 05:33:39.916185   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:33:39.916252   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:33:39.927106   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:33:39.927126   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:33:39.927132   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:33:39.963543   13060 logs.go:123] Gathering logs for coredns [d57fd0f376b8] ...
	I1007 05:33:39.963554   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57fd0f376b8"
	I1007 05:33:39.975698   13060 logs.go:123] Gathering logs for coredns [33e114d3b8f3] ...
	I1007 05:33:39.975708   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e114d3b8f3"
	I1007 05:33:39.991335   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:33:39.991346   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:33:40.011063   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:33:40.011157   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:33:40.027114   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:33:40.027121   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:33:40.042601   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:33:40.042611   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:33:40.064365   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:33:40.064376   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:33:40.068944   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:33:40.068951   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:33:40.080876   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:33:40.080890   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:33:40.106047   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:33:40.106056   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:33:40.117390   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:33:40.117401   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:33:40.136304   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:33:40.136319   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:33:40.148063   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:33:40.148076   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:33:40.159659   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:33:40.159672   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:33:40.179538   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:33:40.179553   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:33:40.193473   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:33:40.193486   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:33:40.193509   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:33:40.193514   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:33:40.193517   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:33:40.193520   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:33:40.193522   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:33:42.440358   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:42.440387   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1007 05:33:47.438126   13189 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1007 05:33:47.441087   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:47.441102   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:47.442447   13189 out.go:177] * Enabled addons: storage-provisioner
	I1007 05:33:47.450315   13189 addons.go:510] duration metric: took 30.433406875s for enable addons: enabled=[storage-provisioner]
	I1007 05:33:50.197480   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:52.442040   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:52.442096   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:55.199758   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:55.199948   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:33:55.210721   13060 logs.go:282] 1 containers: [a298c7336892]
	I1007 05:33:55.210801   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:33:55.221752   13060 logs.go:282] 1 containers: [fde5262f7106]
	I1007 05:33:55.221834   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:33:55.238546   13060 logs.go:282] 4 containers: [d57fd0f376b8 33e114d3b8f3 b87a93b50113 205a727ddd11]
	I1007 05:33:55.238637   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:33:55.250942   13060 logs.go:282] 1 containers: [586c842835b6]
	I1007 05:33:55.251019   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:33:55.261831   13060 logs.go:282] 1 containers: [6ddb8e8775db]
	I1007 05:33:55.261913   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:33:55.272893   13060 logs.go:282] 1 containers: [081f9bc5b473]
	I1007 05:33:55.272972   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:33:55.283621   13060 logs.go:282] 0 containers: []
	W1007 05:33:55.283632   13060 logs.go:284] No container was found matching "kindnet"
	I1007 05:33:55.283693   13060 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:33:55.298262   13060 logs.go:282] 1 containers: [9c51a5346c6b]
	I1007 05:33:55.298278   13060 logs.go:123] Gathering logs for dmesg ...
	I1007 05:33:55.298283   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:33:55.302935   13060 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:33:55.302941   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:33:55.352629   13060 logs.go:123] Gathering logs for etcd [fde5262f7106] ...
	I1007 05:33:55.352643   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fde5262f7106"
	I1007 05:33:55.367518   13060 logs.go:123] Gathering logs for coredns [33e114d3b8f3] ...
	I1007 05:33:55.367528   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 33e114d3b8f3"
	I1007 05:33:55.379438   13060 logs.go:123] Gathering logs for coredns [b87a93b50113] ...
	I1007 05:33:55.379448   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b87a93b50113"
	I1007 05:33:55.392536   13060 logs.go:123] Gathering logs for kube-apiserver [a298c7336892] ...
	I1007 05:33:55.392545   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a298c7336892"
	I1007 05:33:55.406711   13060 logs.go:123] Gathering logs for coredns [205a727ddd11] ...
	I1007 05:33:55.406723   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 205a727ddd11"
	I1007 05:33:55.419416   13060 logs.go:123] Gathering logs for kube-proxy [6ddb8e8775db] ...
	I1007 05:33:55.419427   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ddb8e8775db"
	I1007 05:33:55.435066   13060 logs.go:123] Gathering logs for kubelet ...
	I1007 05:33:55.435077   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 05:33:55.454031   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:33:55.454122   13060 logs.go:138] Found kubelet problem: Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:33:55.470369   13060 logs.go:123] Gathering logs for coredns [d57fd0f376b8] ...
	I1007 05:33:55.470377   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d57fd0f376b8"
	I1007 05:33:55.481952   13060 logs.go:123] Gathering logs for kube-scheduler [586c842835b6] ...
	I1007 05:33:55.481963   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 586c842835b6"
	I1007 05:33:55.496963   13060 logs.go:123] Gathering logs for storage-provisioner [9c51a5346c6b] ...
	I1007 05:33:55.496977   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c51a5346c6b"
	I1007 05:33:55.508852   13060 logs.go:123] Gathering logs for container status ...
	I1007 05:33:55.508863   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:33:55.520407   13060 logs.go:123] Gathering logs for kube-controller-manager [081f9bc5b473] ...
	I1007 05:33:55.520420   13060 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 081f9bc5b473"
	I1007 05:33:55.537744   13060 logs.go:123] Gathering logs for Docker ...
	I1007 05:33:55.537754   13060 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:33:55.560180   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:33:55.560188   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 05:33:55.560215   13060 out.go:270] X Problems detected in kubelet:
	W1007 05:33:55.560219   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: W1007 12:26:09.786554    3963 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	W1007 05:33:55.560222   13060 out.go:270]   Oct 07 12:26:09 running-upgrade-494000 kubelet[3963]: E1007 12:26:09.786608    3963 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:running-upgrade-494000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-494000' and this object
	I1007 05:33:55.560229   13060 out.go:358] Setting ErrFile to fd 2...
	I1007 05:33:55.560232   13060 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:33:57.443672   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:57.443707   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:34:02.445297   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:34:02.445352   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:34:05.564044   13060 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:34:10.566279   13060 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:34:10.571796   13060 out.go:201] 
	W1007 05:34:10.575860   13060 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1007 05:34:10.575878   13060 out.go:270] * 
	W1007 05:34:10.576784   13060 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:34:10.585802   13060 out.go:201] 
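
[Annotation] At this point process 13060 abandons the node: every healthz poll against https://10.0.2.15:8443/healthz timed out for the full 6m0s wait. A manual probe from inside the guest, sketched here on the assumption that curl is present in the image, would distinguish a hung listener from a refused connection:

  # -k skips TLS verification; --max-time mirrors the client timeout above.
  curl -k --max-time 5 https://10.0.2.15:8443/healthz
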
	I1007 05:34:07.447442   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:34:07.447469   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:34:12.448794   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:34:12.448817   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:34:17.450920   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:34:17.451101   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:34:17.462958   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:34:17.463041   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:34:17.473534   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:34:17.473616   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:34:17.484454   13189 logs.go:282] 2 containers: [4b50b80d34ff 8906704c7223]
	I1007 05:34:17.484532   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:34:17.503284   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:34:17.503356   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:34:17.516062   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:34:17.516140   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:34:17.526682   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:34:17.526760   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:34:17.537107   13189 logs.go:282] 0 containers: []
	W1007 05:34:17.537117   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:34:17.537183   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:34:17.547991   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:34:17.548005   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:34:17.548012   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:34:17.582790   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:34:17.582801   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:34:17.597177   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:34:17.597189   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:34:17.611373   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:34:17.611387   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:34:17.626388   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:34:17.626399   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:34:17.644553   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:34:17.644563   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:34:17.669305   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:34:17.669312   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:34:17.674091   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:34:17.674098   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:34:17.708490   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:34:17.708501   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:34:17.720548   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:34:17.720561   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:34:17.731897   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:34:17.731911   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:34:17.744332   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:34:17.744344   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:34:17.755312   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:34:17.755325   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:34:20.267232   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:34:25.267745   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:34:25.267886   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:34:25.280299   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:34:25.280386   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:34:25.292069   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:34:25.292154   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:34:25.302356   13189 logs.go:282] 2 containers: [4b50b80d34ff 8906704c7223]
	I1007 05:34:25.302425   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:34:25.312956   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:34:25.313037   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:34:25.329136   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:34:25.329208   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:34:25.338933   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:34:25.339001   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:34:25.349348   13189 logs.go:282] 0 containers: []
	W1007 05:34:25.349363   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:34:25.349431   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:34:25.360106   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:34:25.360120   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:34:25.360126   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:34:25.365188   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:34:25.365193   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:34:25.379648   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:34:25.379658   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:34:25.404187   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:34:25.404197   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:34:25.419099   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:34:25.419115   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:34:25.431337   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:34:25.431347   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:34:25.449426   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:34:25.449436   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:34:25.485870   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:34:25.485879   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:34:25.520132   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:34:25.520144   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:34:25.534206   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:34:25.534217   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:34:25.545630   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:34:25.545642   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:34:25.558321   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:34:25.558332   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:34:25.569636   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:34:25.569647   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-10-07 12:25:11 UTC, ends at Mon 2024-10-07 12:34:26 UTC. --
	Oct 07 12:34:07 running-upgrade-494000 dockerd[3217]: time="2024-10-07T12:34:07.183827359Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/9fc701c3b8689c53e47c89ef25041d3ad8e017655dade87c16a688dc8b39d011 pid=16070 runtime=io.containerd.runc.v2
	Oct 07 12:34:07 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:07Z" level=error msg="ContainerStats resp: {0x40004a02c0 linux}"
	Oct 07 12:34:07 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:07Z" level=error msg="ContainerStats resp: {0x400007fec0 linux}"
	Oct 07 12:34:08 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:08Z" level=error msg="ContainerStats resp: {0x40005317c0 linux}"
	Oct 07 12:34:09 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:09Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 07 12:34:09 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:09Z" level=error msg="ContainerStats resp: {0x4000613c40 linux}"
	Oct 07 12:34:09 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:09Z" level=error msg="ContainerStats resp: {0x40000b8000 linux}"
	Oct 07 12:34:09 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:09Z" level=error msg="ContainerStats resp: {0x40000b9b40 linux}"
	Oct 07 12:34:09 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:09Z" level=error msg="ContainerStats resp: {0x4000818200 linux}"
	Oct 07 12:34:09 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:09Z" level=error msg="ContainerStats resp: {0x4000818640 linux}"
	Oct 07 12:34:09 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:09Z" level=error msg="ContainerStats resp: {0x40004a1300 linux}"
	Oct 07 12:34:09 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:09Z" level=error msg="ContainerStats resp: {0x40004a1800 linux}"
	Oct 07 12:34:14 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:14Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 07 12:34:19 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:19Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 07 12:34:19 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:19Z" level=error msg="ContainerStats resp: {0x4000530c00 linux}"
	Oct 07 12:34:19 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:19Z" level=error msg="ContainerStats resp: {0x4000612900 linux}"
	Oct 07 12:34:20 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:20Z" level=error msg="ContainerStats resp: {0x4000818800 linux}"
	Oct 07 12:34:21 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:21Z" level=error msg="ContainerStats resp: {0x4000819280 linux}"
	Oct 07 12:34:21 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:21Z" level=error msg="ContainerStats resp: {0x40004a1400 linux}"
	Oct 07 12:34:21 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:21Z" level=error msg="ContainerStats resp: {0x40004a0040 linux}"
	Oct 07 12:34:21 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:21Z" level=error msg="ContainerStats resp: {0x40008185c0 linux}"
	Oct 07 12:34:21 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:21Z" level=error msg="ContainerStats resp: {0x4000818a80 linux}"
	Oct 07 12:34:21 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:21Z" level=error msg="ContainerStats resp: {0x40004a0d40 linux}"
	Oct 07 12:34:21 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:21Z" level=error msg="ContainerStats resp: {0x4000819600 linux}"
	Oct 07 12:34:24 running-upgrade-494000 cri-dockerd[3058]: time="2024-10-07T12:34:24Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
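
[Annotation] The error-level "ContainerStats resp" lines above appear to be cri-dockerd logging ordinary stats responses rather than failures. To look for genuine daemon errors underneath that noise, one could filter the same journal the trace collects, assuming the unit names match this guest:

  # Hypothetical filter: only error-priority entries, minus the stats lines.
  sudo journalctl -u docker -u cri-docker -p err | grep -v "ContainerStats resp"
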
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9fc701c3b8689       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   4625426a1986a
	cb0a7c83090a8       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   be52b03db956a
	d57fd0f376b8e       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   4625426a1986a
	33e114d3b8f3b       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   be52b03db956a
	6ddb8e8775db7       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   fa3aabd5d9ef7
	9c51a5346c6bf       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   fd045706085b8
	586c842835b6d       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   1f67d40f71b10
	fde5262f7106e       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   cde4ab64f3b8e
	081f9bc5b4730       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   72e9fecfd05c1
	a298c73368925       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   0be4564a04565
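
[Annotation] Both coredns containers are on their second attempt, with the first-attempt instances Exited two minutes earlier, while the control-plane containers have been up for the whole four minutes. A narrower view of just the restarting pair, assuming crictl is on the guest's PATH, would be:

  # Hypothetical focused listing of the coredns restarts shown above.
  sudo crictl ps -a --name coredns
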
	
	
	==> coredns [33e114d3b8f3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 916529738001361062.8218127447328467938. HINFO: read udp 10.244.0.3:50538->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 916529738001361062.8218127447328467938. HINFO: read udp 10.244.0.3:37456->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 916529738001361062.8218127447328467938. HINFO: read udp 10.244.0.3:47208->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 916529738001361062.8218127447328467938. HINFO: read udp 10.244.0.3:60303->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 916529738001361062.8218127447328467938. HINFO: read udp 10.244.0.3:42531->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 916529738001361062.8218127447328467938. HINFO: read udp 10.244.0.3:56372->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 916529738001361062.8218127447328467938. HINFO: read udp 10.244.0.3:58702->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 916529738001361062.8218127447328467938. HINFO: read udp 10.244.0.3:45877->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 916529738001361062.8218127447328467938. HINFO: read udp 10.244.0.3:55811->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 916529738001361062.8218127447328467938. HINFO: read udp 10.244.0.3:56223->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9fc701c3b868] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3044278600695929952.4494664199240152813. HINFO: read udp 10.244.0.2:56790->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3044278600695929952.4494664199240152813. HINFO: read udp 10.244.0.2:36633->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3044278600695929952.4494664199240152813. HINFO: read udp 10.244.0.2:57036->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3044278600695929952.4494664199240152813. HINFO: read udp 10.244.0.2:45315->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3044278600695929952.4494664199240152813. HINFO: read udp 10.244.0.2:43921->10.0.2.3:53: i/o timeout
	
	
	==> coredns [cb0a7c83090a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7075004007742084970.4473681450086826901. HINFO: read udp 10.244.0.3:37212->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7075004007742084970.4473681450086826901. HINFO: read udp 10.244.0.3:51314->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7075004007742084970.4473681450086826901. HINFO: read udp 10.244.0.3:57580->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7075004007742084970.4473681450086826901. HINFO: read udp 10.244.0.3:37357->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7075004007742084970.4473681450086826901. HINFO: read udp 10.244.0.3:39987->10.0.2.3:53: i/o timeout
	
	
	==> coredns [d57fd0f376b8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8551292060768359834.3938250343913144769. HINFO: read udp 10.244.0.2:46086->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8551292060768359834.3938250343913144769. HINFO: read udp 10.244.0.2:34840->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8551292060768359834.3938250343913144769. HINFO: read udp 10.244.0.2:50564->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8551292060768359834.3938250343913144769. HINFO: read udp 10.244.0.2:52769->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8551292060768359834.3938250343913144769. HINFO: read udp 10.244.0.2:52233->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8551292060768359834.3938250343913144769. HINFO: read udp 10.244.0.2:57927->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8551292060768359834.3938250343913144769. HINFO: read udp 10.244.0.2:45083->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8551292060768359834.3938250343913144769. HINFO: read udp 10.244.0.2:60018->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8551292060768359834.3938250343913144769. HINFO: read udp 10.244.0.2:49203->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8551292060768359834.3938250343913144769. HINFO: read udp 10.244.0.2:43402->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
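
[Annotation] All four coredns instances above fail the same way: HINFO probes to the upstream resolver 10.0.2.3:53 time out. 10.0.2.3 is the DNS forwarder that QEMU's user-mode (slirp) networking exposes to the guest, so the errors point at host-side DNS forwarding rather than at CoreDNS itself. A direct query of that forwarder, assuming dig is available on this Buildroot image, would confirm it:

  # Hypothetical probe of the slirp DNS forwarder coredns cannot reach.
  dig @10.0.2.3 kubernetes.io +time=2 +tries=1
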
	
	
	==> describe nodes <==
	Name:               running-upgrade-494000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-494000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=running-upgrade-494000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T05_30_06_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:30:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-494000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 12:34:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:30:06 +0000   Mon, 07 Oct 2024 12:30:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:30:06 +0000   Mon, 07 Oct 2024 12:30:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:30:06 +0000   Mon, 07 Oct 2024 12:30:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:30:06 +0000   Mon, 07 Oct 2024 12:30:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-494000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 6940bc2aa17c45abbaf39e365fa1a29f
	  System UUID:                6940bc2aa17c45abbaf39e365fa1a29f
	  Boot ID:                    0f36b1fd-9714-42b5-b946-a6b7bf3718eb
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-nvj7z                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 coredns-6d4b75cb6d-q6nft                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 etcd-running-upgrade-494000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m20s
	  kube-system                 kube-apiserver-running-upgrade-494000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-controller-manager-running-upgrade-494000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-fl9vp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-running-upgrade-494000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m7s   kube-proxy       
	  Normal  Starting                 4m21s  kubelet          Starting kubelet.
	  Normal  NodeReady                4m20s  kubelet          Node running-upgrade-494000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m20s  kubelet          Node running-upgrade-494000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s  kubelet          Node running-upgrade-494000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s  kubelet          Node running-upgrade-494000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m8s   node-controller  Node running-upgrade-494000 event: Registered Node running-upgrade-494000 in Controller
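
[Annotation] By the node's own account everything is Ready and the control-plane pods are scheduled, so the healthz timeouts earlier in the trace are about reaching the apiserver, not about the node's reported state. A follow-up that lists recent kube-system events newest-last, using the same pinned kubectl and kubeconfig the trace uses, could be:

  # Hypothetical event listing; paths are taken from this run.
  sudo /var/lib/minikube/binaries/v1.24.1/kubectl \
    --kubeconfig=/var/lib/minikube/kubeconfig \
    get events -n kube-system --sort-by=.lastTimestamp
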
	
	
	==> dmesg <==
	[  +1.691993] systemd-fstab-generator[876]: Ignoring "noauto" for root device
	[  +0.072910] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.078941] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.136704] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.095424] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.087974] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +2.491520] systemd-fstab-generator[1283]: Ignoring "noauto" for root device
	[  +9.648958] systemd-fstab-generator[1932]: Ignoring "noauto" for root device
	[  +2.365855] systemd-fstab-generator[2213]: Ignoring "noauto" for root device
	[  +0.191979] systemd-fstab-generator[2249]: Ignoring "noauto" for root device
	[  +0.095999] systemd-fstab-generator[2260]: Ignoring "noauto" for root device
	[  +0.101897] systemd-fstab-generator[2273]: Ignoring "noauto" for root device
	[  +2.853771] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.220682] systemd-fstab-generator[3014]: Ignoring "noauto" for root device
	[  +0.091362] systemd-fstab-generator[3026]: Ignoring "noauto" for root device
	[  +0.084626] systemd-fstab-generator[3037]: Ignoring "noauto" for root device
	[  +0.095698] systemd-fstab-generator[3051]: Ignoring "noauto" for root device
	[  +2.312775] systemd-fstab-generator[3203]: Ignoring "noauto" for root device
	[  +2.990868] systemd-fstab-generator[3755]: Ignoring "noauto" for root device
	[  +1.582924] systemd-fstab-generator[3957]: Ignoring "noauto" for root device
	[Oct 7 12:26] kauditd_printk_skb: 68 callbacks suppressed
	[Oct 7 12:29] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.653477] systemd-fstab-generator[10597]: Ignoring "noauto" for root device
	[Oct 7 12:30] systemd-fstab-generator[11212]: Ignoring "noauto" for root device
	[  +0.463901] systemd-fstab-generator[11343]: Ignoring "noauto" for root device
	
	
	==> etcd [fde5262f7106] <==
	{"level":"info","ts":"2024-10-07T12:30:01.071Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-07T12:30:01.079Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-10-07T12:30:01.079Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-07T12:30:01.080Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-07T12:30:01.079Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-07T12:30:01.080Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-10-07T12:30:01.080Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-10-07T12:30:01.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-07T12:30:01.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-07T12:30:01.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-10-07T12:30:01.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-10-07T12:30:01.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-07T12:30:01.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-10-07T12:30:01.948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-07T12:30:01.948Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-494000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-07T12:30:01.948Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T12:30:01.949Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T12:30:01.949Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T12:30:01.949Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-07T12:30:01.950Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T12:30:01.950Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T12:30:01.950Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T12:30:01.950Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T12:30:01.950Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-07T12:30:01.951Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	
	
	==> kernel <==
	 12:34:26 up 9 min,  0 users,  load average: 0.22, 0.26, 0.17
	Linux running-upgrade-494000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [a298c7336892] <==
	I1007 12:30:03.227405       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1007 12:30:03.227439       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1007 12:30:03.232753       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1007 12:30:03.232812       1 cache.go:39] Caches are synced for autoregister controller
	I1007 12:30:03.234336       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1007 12:30:03.235482       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1007 12:30:03.246081       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1007 12:30:03.956700       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1007 12:30:04.142138       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1007 12:30:04.147852       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1007 12:30:04.148036       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1007 12:30:04.279787       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1007 12:30:04.293034       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1007 12:30:04.396445       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1007 12:30:04.398899       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1007 12:30:04.399283       1 controller.go:611] quota admission added evaluator for: endpoints
	I1007 12:30:04.400621       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1007 12:30:05.285230       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1007 12:30:05.953299       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1007 12:30:05.956362       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1007 12:30:05.971538       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1007 12:30:06.011228       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1007 12:30:18.288785       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1007 12:30:18.888565       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1007 12:30:19.703599       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
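
[Annotation] The apiserver's last log line is from 12:30:19, roughly four minutes before the healthz polls above start timing out, which is consistent with the process still running (the container status shows it Running) but no longer answering requests. Following the container live, using the container ID from the trace, would show whether it logs anything further:

  # Hypothetical live follow of the apiserver container seen above.
  docker logs --follow --tail 50 a298c7336892
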
	
	
	==> kube-controller-manager [081f9bc5b473] <==
	I1007 12:30:18.142137       1 range_allocator.go:173] Starting range CIDR allocator
	I1007 12:30:18.142138       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1007 12:30:18.142141       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1007 12:30:18.144430       1 range_allocator.go:374] Set node running-upgrade-494000 PodCIDR to [10.244.0.0/24]
	I1007 12:30:18.156749       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1007 12:30:18.186139       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1007 12:30:18.187251       1 shared_informer.go:262] Caches are synced for daemon sets
	I1007 12:30:18.187259       1 shared_informer.go:262] Caches are synced for GC
	I1007 12:30:18.210429       1 shared_informer.go:262] Caches are synced for TTL
	I1007 12:30:18.210936       1 shared_informer.go:262] Caches are synced for taint
	I1007 12:30:18.210965       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1007 12:30:18.210979       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-494000. Assuming now as a timestamp.
	I1007 12:30:18.210995       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1007 12:30:18.211038       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1007 12:30:18.211137       1 event.go:294] "Event occurred" object="running-upgrade-494000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-494000 event: Registered Node running-upgrade-494000 in Controller"
	I1007 12:30:18.212765       1 shared_informer.go:262] Caches are synced for attach detach
	I1007 12:30:18.238196       1 shared_informer.go:262] Caches are synced for persistent volume
	I1007 12:30:18.287235       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1007 12:30:18.290192       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1007 12:30:18.653824       1 shared_informer.go:262] Caches are synced for garbage collector
	I1007 12:30:18.686008       1 shared_informer.go:262] Caches are synced for garbage collector
	I1007 12:30:18.686020       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1007 12:30:18.897307       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-fl9vp"
	I1007 12:30:19.039831       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-nvj7z"
	I1007 12:30:19.044983       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-q6nft"
	
	
	==> kube-proxy [6ddb8e8775db] <==
	I1007 12:30:19.689794       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1007 12:30:19.689817       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1007 12:30:19.689826       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1007 12:30:19.701194       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1007 12:30:19.701207       1 server_others.go:206] "Using iptables Proxier"
	I1007 12:30:19.701220       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1007 12:30:19.701321       1 server.go:661] "Version info" version="v1.24.1"
	I1007 12:30:19.701418       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:30:19.701701       1 config.go:317] "Starting service config controller"
	I1007 12:30:19.701711       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1007 12:30:19.701803       1 config.go:226] "Starting endpoint slice config controller"
	I1007 12:30:19.701836       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1007 12:30:19.702589       1 config.go:444] "Starting node config controller"
	I1007 12:30:19.702596       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1007 12:30:19.802413       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1007 12:30:19.802457       1 shared_informer.go:262] Caches are synced for service config
	I1007 12:30:19.802665       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [586c842835b6] <==
	W1007 12:30:03.198909       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 12:30:03.198911       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1007 12:30:03.199773       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1007 12:30:03.199833       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 12:30:03.199837       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1007 12:30:03.199856       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 12:30:03.199859       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1007 12:30:03.199870       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 12:30:03.199874       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1007 12:30:03.199889       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 12:30:03.199896       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1007 12:30:03.199924       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 12:30:03.199927       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1007 12:30:03.199938       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 12:30:03.199941       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 12:30:03.199785       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1007 12:30:04.015810       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 12:30:04.016170       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1007 12:30:04.019714       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 12:30:04.019808       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1007 12:30:04.048868       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 12:30:04.048898       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1007 12:30:04.191131       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 12:30:04.191198       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1007 12:30:07.097603       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-10-07 12:25:11 UTC, ends at Mon 2024-10-07 12:34:27 UTC. --
	Oct 07 12:30:07 running-upgrade-494000 kubelet[11218]: E1007 12:30:07.794321   11218 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-494000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-494000"
	Oct 07 12:30:07 running-upgrade-494000 kubelet[11218]: E1007 12:30:07.986487   11218 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-494000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-494000"
	Oct 07 12:30:08 running-upgrade-494000 kubelet[11218]: I1007 12:30:08.187681   11218 request.go:601] Waited for 1.126944942s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Oct 07 12:30:08 running-upgrade-494000 kubelet[11218]: E1007 12:30:08.190742   11218 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-494000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-494000"
	Oct 07 12:30:18 running-upgrade-494000 kubelet[11218]: I1007 12:30:18.217071   11218 topology_manager.go:200] "Topology Admit Handler"
	Oct 07 12:30:18 running-upgrade-494000 kubelet[11218]: I1007 12:30:18.244524   11218 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 07 12:30:18 running-upgrade-494000 kubelet[11218]: I1007 12:30:18.245227   11218 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e98d483d-b49a-4955-9d3e-fe7dcbc404fd-tmp\") pod \"storage-provisioner\" (UID: \"e98d483d-b49a-4955-9d3e-fe7dcbc404fd\") " pod="kube-system/storage-provisioner"
	Oct 07 12:30:18 running-upgrade-494000 kubelet[11218]: I1007 12:30:18.245383   11218 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs6jp\" (UniqueName: \"kubernetes.io/projected/e98d483d-b49a-4955-9d3e-fe7dcbc404fd-kube-api-access-gs6jp\") pod \"storage-provisioner\" (UID: \"e98d483d-b49a-4955-9d3e-fe7dcbc404fd\") " pod="kube-system/storage-provisioner"
	Oct 07 12:30:18 running-upgrade-494000 kubelet[11218]: I1007 12:30:18.245845   11218 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 07 12:30:18 running-upgrade-494000 kubelet[11218]: E1007 12:30:18.348332   11218 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 07 12:30:18 running-upgrade-494000 kubelet[11218]: E1007 12:30:18.348353   11218 projected.go:192] Error preparing data for projected volume kube-api-access-gs6jp for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Oct 07 12:30:18 running-upgrade-494000 kubelet[11218]: E1007 12:30:18.348386   11218 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/e98d483d-b49a-4955-9d3e-fe7dcbc404fd-kube-api-access-gs6jp podName:e98d483d-b49a-4955-9d3e-fe7dcbc404fd nodeName:}" failed. No retries permitted until 2024-10-07 12:30:18.848373951 +0000 UTC m=+12.906439518 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gs6jp" (UniqueName: "kubernetes.io/projected/e98d483d-b49a-4955-9d3e-fe7dcbc404fd-kube-api-access-gs6jp") pod "storage-provisioner" (UID: "e98d483d-b49a-4955-9d3e-fe7dcbc404fd") : configmap "kube-root-ca.crt" not found
	Oct 07 12:30:18 running-upgrade-494000 kubelet[11218]: I1007 12:30:18.898100   11218 topology_manager.go:200] "Topology Admit Handler"
	Oct 07 12:30:18 running-upgrade-494000 kubelet[11218]: I1007 12:30:18.950460   11218 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28574a2f-895a-4267-98e5-e96a0f2cc35d-xtables-lock\") pod \"kube-proxy-fl9vp\" (UID: \"28574a2f-895a-4267-98e5-e96a0f2cc35d\") " pod="kube-system/kube-proxy-fl9vp"
	Oct 07 12:30:18 running-upgrade-494000 kubelet[11218]: I1007 12:30:18.950496   11218 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/28574a2f-895a-4267-98e5-e96a0f2cc35d-kube-proxy\") pod \"kube-proxy-fl9vp\" (UID: \"28574a2f-895a-4267-98e5-e96a0f2cc35d\") " pod="kube-system/kube-proxy-fl9vp"
	Oct 07 12:30:18 running-upgrade-494000 kubelet[11218]: I1007 12:30:18.950509   11218 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4hhm\" (UniqueName: \"kubernetes.io/projected/28574a2f-895a-4267-98e5-e96a0f2cc35d-kube-api-access-p4hhm\") pod \"kube-proxy-fl9vp\" (UID: \"28574a2f-895a-4267-98e5-e96a0f2cc35d\") " pod="kube-system/kube-proxy-fl9vp"
	Oct 07 12:30:18 running-upgrade-494000 kubelet[11218]: I1007 12:30:18.950527   11218 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28574a2f-895a-4267-98e5-e96a0f2cc35d-lib-modules\") pod \"kube-proxy-fl9vp\" (UID: \"28574a2f-895a-4267-98e5-e96a0f2cc35d\") " pod="kube-system/kube-proxy-fl9vp"
	Oct 07 12:30:19 running-upgrade-494000 kubelet[11218]: I1007 12:30:19.041479   11218 topology_manager.go:200] "Topology Admit Handler"
	Oct 07 12:30:19 running-upgrade-494000 kubelet[11218]: I1007 12:30:19.045696   11218 topology_manager.go:200] "Topology Admit Handler"
	Oct 07 12:30:19 running-upgrade-494000 kubelet[11218]: I1007 12:30:19.152135   11218 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs78v\" (UniqueName: \"kubernetes.io/projected/e72ba82c-fbd5-4763-8c7a-80131439aa7e-kube-api-access-gs78v\") pod \"coredns-6d4b75cb6d-q6nft\" (UID: \"e72ba82c-fbd5-4763-8c7a-80131439aa7e\") " pod="kube-system/coredns-6d4b75cb6d-q6nft"
	Oct 07 12:30:19 running-upgrade-494000 kubelet[11218]: I1007 12:30:19.152179   11218 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc8px\" (UniqueName: \"kubernetes.io/projected/e0a67abb-679b-4667-9a7d-9a3e354123df-kube-api-access-hc8px\") pod \"coredns-6d4b75cb6d-nvj7z\" (UID: \"e0a67abb-679b-4667-9a7d-9a3e354123df\") " pod="kube-system/coredns-6d4b75cb6d-nvj7z"
	Oct 07 12:30:19 running-upgrade-494000 kubelet[11218]: I1007 12:30:19.152195   11218 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0a67abb-679b-4667-9a7d-9a3e354123df-config-volume\") pod \"coredns-6d4b75cb6d-nvj7z\" (UID: \"e0a67abb-679b-4667-9a7d-9a3e354123df\") " pod="kube-system/coredns-6d4b75cb6d-nvj7z"
	Oct 07 12:30:19 running-upgrade-494000 kubelet[11218]: I1007 12:30:19.152214   11218 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e72ba82c-fbd5-4763-8c7a-80131439aa7e-config-volume\") pod \"coredns-6d4b75cb6d-q6nft\" (UID: \"e72ba82c-fbd5-4763-8c7a-80131439aa7e\") " pod="kube-system/coredns-6d4b75cb6d-q6nft"
	Oct 07 12:34:07 running-upgrade-494000 kubelet[11218]: I1007 12:34:07.370428   11218 scope.go:110] "RemoveContainer" containerID="b87a93b50113c20825fa49857d5e9fcad52001fddf01f484573c67948e261f18"
	Oct 07 12:34:07 running-upgrade-494000 kubelet[11218]: I1007 12:34:07.385868   11218 scope.go:110] "RemoveContainer" containerID="205a727ddd114a8a4cb82735b090f1625e6f535c1184510cebd9a6d84333d1ae"
	
	
	==> storage-provisioner [9c51a5346c6b] <==
	I1007 12:30:19.331143       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 12:30:19.336453       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 12:30:19.336534       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 12:30:19.340145       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 12:30:19.340297       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-494000_2268e5ad-dcf4-4bfb-b4c7-809233960d31!
	I1007 12:30:19.340686       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d4271904-aa98-424f-9023-43cea3ab3c9f", APIVersion:"v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-494000_2268e5ad-dcf4-4bfb-b4c7-809233960d31 became leader
	I1007 12:30:19.440467       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-494000_2268e5ad-dcf4-4bfb-b4c7-809233960d31!
	

-- /stdout --
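The dump above shows a control plane that was still healthy at 12:30:19 (coredns scaled to 2, kube-proxy caches synced, storage-provisioner holding its leader lease), while the probe below reports the apiserver as Stopped at 12:34. Before the profile is cleaned up, a cross-check along these lines shows which view is current (a sketch; it assumes kubectl is on PATH and that minikube wrote the usual kubeconfig context for this profile):

  # Does the apiserver still answer requests?
  kubectl --context running-upgrade-494000 -n kube-system get pods

  # Compare with minikube's own per-component view:
  out/minikube-darwin-arm64 -p running-upgrade-494000 status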
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-494000 -n running-upgrade-494000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-494000 -n running-upgrade-494000: exit status 2 (15.638615584s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-494000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-494000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-494000
--- FAIL: TestRunningBinaryUpgrade (622.10s)
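Note that the non-zero exit statuses from the status probes (2 above, 7 later in this report) appear to be minikube's encoding of which components are down rather than hard command failures, which is why the harness annotates them with "may be ok". The probe itself is an ordinary Go-template query and can be rerun by hand while a profile still exists:

  # Print only the apiserver component state for the profile:
  out/minikube-darwin-arm64 status --format='{{.APIServer}}' -p running-upgrade-494000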

TestKubernetesUpgrade (17.32s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-881000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-881000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.868892417s)

-- stdout --
	* [kubernetes-upgrade-881000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-881000" primary control-plane node in "kubernetes-upgrade-881000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-881000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:27:20.897571   13118 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:27:20.897733   13118 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:27:20.897736   13118 out.go:358] Setting ErrFile to fd 2...
	I1007 05:27:20.897738   13118 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:27:20.897884   13118 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:27:20.899082   13118 out.go:352] Setting JSON to false
	I1007 05:27:20.917419   13118 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7011,"bootTime":1728297029,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:27:20.917493   13118 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:27:20.922963   13118 out.go:177] * [kubernetes-upgrade-881000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:27:20.929875   13118 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:27:20.929913   13118 notify.go:220] Checking for updates...
	I1007 05:27:20.935914   13118 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:27:20.940829   13118 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:27:20.948858   13118 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:27:20.950200   13118 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:27:20.952883   13118 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:27:20.956332   13118 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:27:20.956402   13118 config.go:182] Loaded profile config "running-upgrade-494000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:27:20.956480   13118 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:27:20.960777   13118 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:27:20.967938   13118 start.go:297] selected driver: qemu2
	I1007 05:27:20.967944   13118 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:27:20.967951   13118 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:27:20.970380   13118 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:27:20.971790   13118 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:27:20.974957   13118 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 05:27:20.974971   13118 cni.go:84] Creating CNI manager for ""
	I1007 05:27:20.974997   13118 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1007 05:27:20.975034   13118 start.go:340] cluster config:
	{Name:kubernetes-upgrade-881000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:27:20.979586   13118 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:27:20.987870   13118 out.go:177] * Starting "kubernetes-upgrade-881000" primary control-plane node in "kubernetes-upgrade-881000" cluster
	I1007 05:27:20.991904   13118 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 05:27:20.991920   13118 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1007 05:27:20.991930   13118 cache.go:56] Caching tarball of preloaded images
	I1007 05:27:20.992005   13118 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:27:20.992010   13118 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1007 05:27:20.992081   13118 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/kubernetes-upgrade-881000/config.json ...
	I1007 05:27:20.992097   13118 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/kubernetes-upgrade-881000/config.json: {Name:mkd1eee9dc80e29293407fc0459d17e2c8485436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:27:20.992319   13118 start.go:360] acquireMachinesLock for kubernetes-upgrade-881000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:27:20.992362   13118 start.go:364] duration metric: took 36.583µs to acquireMachinesLock for "kubernetes-upgrade-881000"
	I1007 05:27:20.992374   13118 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:27:20.992399   13118 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:27:20.996880   13118 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:27:21.011966   13118 start.go:159] libmachine.API.Create for "kubernetes-upgrade-881000" (driver="qemu2")
	I1007 05:27:21.011992   13118 client.go:168] LocalClient.Create starting
	I1007 05:27:21.012059   13118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:27:21.012101   13118 main.go:141] libmachine: Decoding PEM data...
	I1007 05:27:21.012110   13118 main.go:141] libmachine: Parsing certificate...
	I1007 05:27:21.012153   13118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:27:21.012187   13118 main.go:141] libmachine: Decoding PEM data...
	I1007 05:27:21.012195   13118 main.go:141] libmachine: Parsing certificate...
	I1007 05:27:21.012565   13118 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:27:21.164914   13118 main.go:141] libmachine: Creating SSH key...
	I1007 05:27:21.274296   13118 main.go:141] libmachine: Creating Disk image...
	I1007 05:27:21.274305   13118 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:27:21.274494   13118 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/disk.qcow2
	I1007 05:27:21.284514   13118 main.go:141] libmachine: STDOUT: 
	I1007 05:27:21.284537   13118 main.go:141] libmachine: STDERR: 
	I1007 05:27:21.284599   13118 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/disk.qcow2 +20000M
	I1007 05:27:21.293550   13118 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:27:21.293568   13118 main.go:141] libmachine: STDERR: 
	I1007 05:27:21.293597   13118 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/disk.qcow2
	I1007 05:27:21.293605   13118 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:27:21.293616   13118 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:27:21.293644   13118 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:fa:d5:49:d8:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/disk.qcow2
	I1007 05:27:21.295426   13118 main.go:141] libmachine: STDOUT: 
	I1007 05:27:21.295439   13118 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:27:21.295462   13118 client.go:171] duration metric: took 283.467375ms to LocalClient.Create
	I1007 05:27:23.297722   13118 start.go:128] duration metric: took 2.30533425s to createHost
	I1007 05:27:23.297813   13118 start.go:83] releasing machines lock for "kubernetes-upgrade-881000", held for 2.305484167s
	W1007 05:27:23.297871   13118 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:27:23.310060   13118 out.go:177] * Deleting "kubernetes-upgrade-881000" in qemu2 ...
	W1007 05:27:23.335840   13118 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:27:23.335877   13118 start.go:729] Will try again in 5 seconds ...
	I1007 05:27:28.338060   13118 start.go:360] acquireMachinesLock for kubernetes-upgrade-881000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:27:28.338729   13118 start.go:364] duration metric: took 548.959µs to acquireMachinesLock for "kubernetes-upgrade-881000"
	I1007 05:27:28.338890   13118 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:27:28.339156   13118 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:27:28.347787   13118 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:27:28.397356   13118 start.go:159] libmachine.API.Create for "kubernetes-upgrade-881000" (driver="qemu2")
	I1007 05:27:28.397415   13118 client.go:168] LocalClient.Create starting
	I1007 05:27:28.397566   13118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:27:28.397650   13118 main.go:141] libmachine: Decoding PEM data...
	I1007 05:27:28.397680   13118 main.go:141] libmachine: Parsing certificate...
	I1007 05:27:28.397741   13118 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:27:28.397802   13118 main.go:141] libmachine: Decoding PEM data...
	I1007 05:27:28.397813   13118 main.go:141] libmachine: Parsing certificate...
	I1007 05:27:28.398405   13118 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:27:28.552365   13118 main.go:141] libmachine: Creating SSH key...
	I1007 05:27:28.669824   13118 main.go:141] libmachine: Creating Disk image...
	I1007 05:27:28.669835   13118 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:27:28.670062   13118 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/disk.qcow2
	I1007 05:27:28.680341   13118 main.go:141] libmachine: STDOUT: 
	I1007 05:27:28.680442   13118 main.go:141] libmachine: STDERR: 
	I1007 05:27:28.680506   13118 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/disk.qcow2 +20000M
	I1007 05:27:28.688888   13118 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:27:28.688956   13118 main.go:141] libmachine: STDERR: 
	I1007 05:27:28.688972   13118 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/disk.qcow2
	I1007 05:27:28.688976   13118 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:27:28.688983   13118 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:27:28.689021   13118 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:18:c0:36:d2:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/disk.qcow2
	I1007 05:27:28.690893   13118 main.go:141] libmachine: STDOUT: 
	I1007 05:27:28.690956   13118 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:27:28.690968   13118 client.go:171] duration metric: took 293.550625ms to LocalClient.Create
	I1007 05:27:30.693146   13118 start.go:128] duration metric: took 2.353993125s to createHost
	I1007 05:27:30.693234   13118 start.go:83] releasing machines lock for "kubernetes-upgrade-881000", held for 2.354524041s
	W1007 05:27:30.693583   13118 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-881000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-881000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:27:30.703133   13118 out.go:201] 
	W1007 05:27:30.707272   13118 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:27:30.707295   13118 out.go:270] * 
	* 
	W1007 05:27:30.709916   13118 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:27:30.717177   13118 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-881000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
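Both provisioning attempts above fail at the same point: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the /var/run/socket_vmnet socket ("Connection refused"), so no VM ever boots. When reproducing locally, the first thing to check is the daemon behind that socket (a sketch; paths assume the Homebrew-style socket_vmnet install referenced in the log):

  # Is anything serving the socket?
  ls -l /var/run/socket_vmnet
  sudo launchctl list | grep -i socket_vmnet

  # If the daemon is missing or dead, restart it before rerunning the test:
  sudo brew services restart socket_vmnet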
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-881000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-881000: (2.034310334s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-881000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-881000 status --format={{.Host}}: exit status 7 (64.48725ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
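For reference, the upgrade sequence this test drives can be restated as three plain commands, taken verbatim from the Run lines above and below:

  out/minikube-darwin-arm64 start -p kubernetes-upgrade-881000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2
  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-881000
  out/minikube-darwin-arm64 start -p kubernetes-upgrade-881000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2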
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-881000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-881000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.193801958s)

-- stdout --
	* [kubernetes-upgrade-881000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-881000" primary control-plane node in "kubernetes-upgrade-881000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-881000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-881000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:27:32.866900   13149 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:27:32.867074   13149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:27:32.867077   13149 out.go:358] Setting ErrFile to fd 2...
	I1007 05:27:32.867079   13149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:27:32.867209   13149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:27:32.868310   13149 out.go:352] Setting JSON to false
	I1007 05:27:32.886317   13149 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7023,"bootTime":1728297029,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:27:32.886390   13149 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:27:32.890754   13149 out.go:177] * [kubernetes-upgrade-881000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:27:32.897564   13149 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:27:32.897614   13149 notify.go:220] Checking for updates...
	I1007 05:27:32.904449   13149 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:27:32.907486   13149 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:27:32.910528   13149 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:27:32.919429   13149 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:27:32.929409   13149 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:27:32.933808   13149 config.go:182] Loaded profile config "kubernetes-upgrade-881000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1007 05:27:32.934097   13149 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:27:32.937501   13149 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:27:32.944465   13149 start.go:297] selected driver: qemu2
	I1007 05:27:32.944471   13149 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:27:32.944517   13149 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:27:32.947414   13149 cni.go:84] Creating CNI manager for ""
	I1007 05:27:32.947450   13149 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:27:32.947477   13149 start.go:340] cluster config:
	{Name:kubernetes-upgrade-881000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-881000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:27:32.952273   13149 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:27:32.960466   13149 out.go:177] * Starting "kubernetes-upgrade-881000" primary control-plane node in "kubernetes-upgrade-881000" cluster
	I1007 05:27:32.964350   13149 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:27:32.964370   13149 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:27:32.964381   13149 cache.go:56] Caching tarball of preloaded images
	I1007 05:27:32.964473   13149 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:27:32.964479   13149 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:27:32.964535   13149 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/kubernetes-upgrade-881000/config.json ...
	I1007 05:27:32.965004   13149 start.go:360] acquireMachinesLock for kubernetes-upgrade-881000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:27:32.965049   13149 start.go:364] duration metric: took 36.791µs to acquireMachinesLock for "kubernetes-upgrade-881000"
	I1007 05:27:32.965060   13149 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:27:32.965066   13149 fix.go:54] fixHost starting: 
	I1007 05:27:32.965199   13149 fix.go:112] recreateIfNeeded on kubernetes-upgrade-881000: state=Stopped err=<nil>
	W1007 05:27:32.965211   13149 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:27:32.969504   13149 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-881000" ...
	I1007 05:27:32.977454   13149 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:27:32.977498   13149 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:18:c0:36:d2:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/disk.qcow2
	I1007 05:27:32.979877   13149 main.go:141] libmachine: STDOUT: 
	I1007 05:27:32.979894   13149 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:27:32.979927   13149 fix.go:56] duration metric: took 14.860041ms for fixHost
	I1007 05:27:32.979932   13149 start.go:83] releasing machines lock for "kubernetes-upgrade-881000", held for 14.877958ms
	W1007 05:27:32.979938   13149 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:27:32.979985   13149 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:27:32.979989   13149 start.go:729] Will try again in 5 seconds ...
	I1007 05:27:37.981956   13149 start.go:360] acquireMachinesLock for kubernetes-upgrade-881000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:27:37.982121   13149 start.go:364] duration metric: took 137.958µs to acquireMachinesLock for "kubernetes-upgrade-881000"
	I1007 05:27:37.982165   13149 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:27:37.982173   13149 fix.go:54] fixHost starting: 
	I1007 05:27:37.982447   13149 fix.go:112] recreateIfNeeded on kubernetes-upgrade-881000: state=Stopped err=<nil>
	W1007 05:27:37.982457   13149 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:27:37.985811   13149 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-881000" ...
	I1007 05:27:37.992604   13149 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:27:37.992656   13149 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:18:c0:36:d2:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubernetes-upgrade-881000/disk.qcow2
	I1007 05:27:37.996433   13149 main.go:141] libmachine: STDOUT: 
	I1007 05:27:37.996458   13149 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:27:37.996488   13149 fix.go:56] duration metric: took 14.315958ms for fixHost
	I1007 05:27:37.996493   13149 start.go:83] releasing machines lock for "kubernetes-upgrade-881000", held for 14.364917ms
	W1007 05:27:37.996564   13149 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-881000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-881000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:27:38.003639   13149 out.go:201] 
	W1007 05:27:38.007736   13149 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:27:38.007753   13149 out.go:270] * 
	* 
	W1007 05:27:38.008610   13149 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:27:38.018710   13149 out.go:201] 

** /stderr **
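The repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` above means the socket_vmnet networking daemon was not accepting connections on its unix socket, so each restart attempt failed before QEMU was even launched. A minimal sketch for checking the daemon from the host, assuming the install paths that appear in the log (client under /opt/socket_vmnet, socket at /var/run/socket_vmnet):

    # does the unix socket exist at the path minikube dials?
    ls -l /var/run/socket_vmnet
    # is the daemon process alive?
    pgrep -fl socket_vmnet
    # run a trivial command through the client (may need sudo depending on
    # socket permissions); "Connection refused" here reproduces the failure
    # outside of minikube
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true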
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-881000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-881000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-881000 version --output=json: exit status 1 (38.315625ms)

** stderr ** 
	error: context "kubernetes-upgrade-881000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
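Because the start command exited before provisioning, no kubeconfig context named kubernetes-upgrade-881000 was ever written, which is why the follow-up kubectl call fails. One way to confirm, using the kubeconfig path from this run:

    # list contexts in the kubeconfig the test uses; the profile name will be absent
    KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig kubectl config get-contexts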
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-10-07 05:27:38.067319 -0700 PDT m=+935.357643710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-881000 -n kubernetes-upgrade-881000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-881000 -n kubernetes-upgrade-881000: exit status 7 (34.039584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-881000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-881000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-881000
--- FAIL: TestKubernetesUpgrade (17.32s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.96s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=18424
- KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current60129219/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.96s)
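DRV_UNSUPPORTED_OS here is a host/architecture mismatch rather than a regression: hyperkit is an x86_64-only hypervisor, and this job runs on an Apple Silicon agent. A quick host check, assuming nothing beyond standard macOS tooling:

    uname -m    # prints "arm64" on this agent, so the hyperkit driver cannot run
    # exit status 56 is what minikube returned for DRV_UNSUPPORTED_OS above,
    # so these subtests would need an arch guard (or a skip) to pass on arm64 hosts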

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.96s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=18424
- KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2595415174/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (0.96s)

TestStoppedBinaryUpgrade/Upgrade (575.54s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2493499436 start -p stopped-upgrade-431000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2493499436 start -p stopped-upgrade-431000 --memory=2200 --vm-driver=qemu2 : (40.42822525s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2493499436 -p stopped-upgrade-431000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2493499436 -p stopped-upgrade-431000 stop: (12.114728041s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-431000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-431000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m42.870156625s)

-- stdout --
	* [stopped-upgrade-431000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-431000" primary control-plane node in "stopped-upgrade-431000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-431000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1007 05:28:35.590956   13189 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:28:35.591134   13189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:28:35.591138   13189 out.go:358] Setting ErrFile to fd 2...
	I1007 05:28:35.591142   13189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:28:35.591314   13189 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:28:35.592620   13189 out.go:352] Setting JSON to false
	I1007 05:28:35.613616   13189 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7086,"bootTime":1728297029,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:28:35.613680   13189 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:28:35.618627   13189 out.go:177] * [stopped-upgrade-431000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:28:35.626579   13189 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:28:35.626630   13189 notify.go:220] Checking for updates...
	I1007 05:28:35.633589   13189 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:28:35.636537   13189 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:28:35.639564   13189 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:28:35.642585   13189 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:28:35.645493   13189 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:28:35.648925   13189 config.go:182] Loaded profile config "stopped-upgrade-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:28:35.652544   13189 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1007 05:28:35.655525   13189 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:28:35.659516   13189 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:28:35.666546   13189 start.go:297] selected driver: qemu2
	I1007 05:28:35.666552   13189 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52462 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 05:28:35.666616   13189 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:28:35.669246   13189 cni.go:84] Creating CNI manager for ""
	I1007 05:28:35.669288   13189 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:28:35.669313   13189 start.go:340] cluster config:
	{Name:stopped-upgrade-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52462 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 05:28:35.669366   13189 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:28:35.677395   13189 out.go:177] * Starting "stopped-upgrade-431000" primary control-plane node in "stopped-upgrade-431000" cluster
	I1007 05:28:35.681526   13189 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1007 05:28:35.681542   13189 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1007 05:28:35.681551   13189 cache.go:56] Caching tarball of preloaded images
	I1007 05:28:35.681638   13189 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:28:35.681643   13189 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1007 05:28:35.681706   13189 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/config.json ...
	I1007 05:28:35.682159   13189 start.go:360] acquireMachinesLock for stopped-upgrade-431000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:28:35.682207   13189 start.go:364] duration metric: took 42.792µs to acquireMachinesLock for "stopped-upgrade-431000"
	I1007 05:28:35.682215   13189 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:28:35.682220   13189 fix.go:54] fixHost starting: 
	I1007 05:28:35.682336   13189 fix.go:112] recreateIfNeeded on stopped-upgrade-431000: state=Stopped err=<nil>
	W1007 05:28:35.682345   13189 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:28:35.686407   13189 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-431000" ...
	I1007 05:28:35.694558   13189 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:28:35.694628   13189 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52428-:22,hostfwd=tcp::52429-:2376,hostname=stopped-upgrade-431000 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/disk.qcow2
	I1007 05:28:35.744464   13189 main.go:141] libmachine: STDOUT: 
	I1007 05:28:35.744490   13189 main.go:141] libmachine: STDERR: 
	I1007 05:28:35.744512   13189 main.go:141] libmachine: Waiting for VM to start (ssh -p 52428 docker@127.0.0.1)...
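This wait loop polls the forwarded SSH port until the guest answers. The equivalent manual probe, using the key path and port that appear in the surrounding log lines:

    # hypothetical manual probe of the guest's forwarded SSH port
    ssh -i /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/id_rsa \
        -p 52428 -o StrictHostKeyChecking=no docker@127.0.0.1 true && echo "guest is up"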
	I1007 05:28:55.939687   13189 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/config.json ...
	I1007 05:28:55.940576   13189 machine.go:93] provisionDockerMachine start ...
	I1007 05:28:55.940842   13189 main.go:141] libmachine: Using SSH client type: native
	I1007 05:28:55.941371   13189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032de1f0] 0x1032e0a30 <nil>  [] 0s} localhost 52428 <nil> <nil>}
	I1007 05:28:55.941389   13189 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 05:28:56.024608   13189 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1007 05:28:56.024637   13189 buildroot.go:166] provisioning hostname "stopped-upgrade-431000"
	I1007 05:28:56.024752   13189 main.go:141] libmachine: Using SSH client type: native
	I1007 05:28:56.024936   13189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032de1f0] 0x1032e0a30 <nil>  [] 0s} localhost 52428 <nil> <nil>}
	I1007 05:28:56.024946   13189 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-431000 && echo "stopped-upgrade-431000" | sudo tee /etc/hostname
	I1007 05:28:56.095230   13189 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-431000
	
	I1007 05:28:56.095288   13189 main.go:141] libmachine: Using SSH client type: native
	I1007 05:28:56.095402   13189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032de1f0] 0x1032e0a30 <nil>  [] 0s} localhost 52428 <nil> <nil>}
	I1007 05:28:56.095411   13189 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 05:28:56.158538   13189 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 05:28:56.158553   13189 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18424-10771/.minikube CaCertPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18424-10771/.minikube}
	I1007 05:28:56.158573   13189 buildroot.go:174] setting up certificates
	I1007 05:28:56.158577   13189 provision.go:84] configureAuth start
	I1007 05:28:56.158581   13189 provision.go:143] copyHostCerts
	I1007 05:28:56.158655   13189 exec_runner.go:144] found /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.pem, removing ...
	I1007 05:28:56.158664   13189 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.pem
	I1007 05:28:56.158767   13189 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.pem (1082 bytes)
	I1007 05:28:56.158992   13189 exec_runner.go:144] found /Users/jenkins/minikube-integration/18424-10771/.minikube/cert.pem, removing ...
	I1007 05:28:56.158996   13189 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18424-10771/.minikube/cert.pem
	I1007 05:28:56.159040   13189 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18424-10771/.minikube/cert.pem (1123 bytes)
	I1007 05:28:56.159155   13189 exec_runner.go:144] found /Users/jenkins/minikube-integration/18424-10771/.minikube/key.pem, removing ...
	I1007 05:28:56.159160   13189 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18424-10771/.minikube/key.pem
	I1007 05:28:56.159200   13189 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18424-10771/.minikube/key.pem (1675 bytes)
	I1007 05:28:56.159292   13189 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-431000 san=[127.0.0.1 localhost minikube stopped-upgrade-431000]
	I1007 05:28:56.395392   13189 provision.go:177] copyRemoteCerts
	I1007 05:28:56.395457   13189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 05:28:56.395470   13189 sshutil.go:53] new ssh client: &{IP:localhost Port:52428 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/id_rsa Username:docker}
	I1007 05:28:56.429902   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 05:28:56.437938   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 05:28:56.446100   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
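The server certificate generated above should carry the SANs listed in the provision line (127.0.0.1, localhost, minikube, stopped-upgrade-431000). A sketch for verifying the copied cert, assuming openssl is available on the host:

    # inspect the SAN list of the server cert minikube just generated
    openssl x509 -noout -text \
      -in /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'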
	I1007 05:28:56.453845   13189 provision.go:87] duration metric: took 295.255959ms to configureAuth
	I1007 05:28:56.453858   13189 buildroot.go:189] setting minikube options for container-runtime
	I1007 05:28:56.454004   13189 config.go:182] Loaded profile config "stopped-upgrade-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:28:56.454076   13189 main.go:141] libmachine: Using SSH client type: native
	I1007 05:28:56.454171   13189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032de1f0] 0x1032e0a30 <nil>  [] 0s} localhost 52428 <nil> <nil>}
	I1007 05:28:56.454177   13189 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1007 05:28:56.519277   13189 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1007 05:28:56.519288   13189 buildroot.go:70] root file system type: tmpfs
	I1007 05:28:56.519345   13189 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1007 05:28:56.519416   13189 main.go:141] libmachine: Using SSH client type: native
	I1007 05:28:56.519530   13189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032de1f0] 0x1032e0a30 <nil>  [] 0s} localhost 52428 <nil> <nil>}
	I1007 05:28:56.519563   13189 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1007 05:28:56.583834   13189 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1007 05:28:56.583890   13189 main.go:141] libmachine: Using SSH client type: native
	I1007 05:28:56.584001   13189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032de1f0] 0x1032e0a30 <nil>  [] 0s} localhost 52428 <nil> <nil>}
	I1007 05:28:56.584013   13189 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1007 05:28:56.959177   13189 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1007 05:28:56.959190   13189 machine.go:96] duration metric: took 1.018621292s to provisionDockerMachine
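The diff-or-replace one-liner a few lines up installs the rendered unit only when it differs from what is on disk; the "can't stat" message simply means no docker.service existed yet, so the new file was moved into place and enabled. To double-check the unit the guest actually runs, a sketch using the profile name from this run:

    # show the docker unit as systemd sees it inside the guest, and its state
    out/minikube-darwin-arm64 ssh -p stopped-upgrade-431000 -- sudo systemctl cat docker
    out/minikube-darwin-arm64 ssh -p stopped-upgrade-431000 -- sudo systemctl is-active docker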
	I1007 05:28:56.959199   13189 start.go:293] postStartSetup for "stopped-upgrade-431000" (driver="qemu2")
	I1007 05:28:56.959205   13189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 05:28:56.959275   13189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 05:28:56.959285   13189 sshutil.go:53] new ssh client: &{IP:localhost Port:52428 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/id_rsa Username:docker}
	I1007 05:28:56.993700   13189 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 05:28:56.995039   13189 info.go:137] Remote host: Buildroot 2021.02.12
	I1007 05:28:56.995048   13189 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18424-10771/.minikube/addons for local assets ...
	I1007 05:28:56.995124   13189 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18424-10771/.minikube/files for local assets ...
	I1007 05:28:56.995219   13189 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18424-10771/.minikube/files/etc/ssl/certs/112842.pem -> 112842.pem in /etc/ssl/certs
	I1007 05:28:56.995341   13189 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 05:28:56.998498   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/files/etc/ssl/certs/112842.pem --> /etc/ssl/certs/112842.pem (1708 bytes)
	I1007 05:28:57.005848   13189 start.go:296] duration metric: took 46.644916ms for postStartSetup
	I1007 05:28:57.005862   13189 fix.go:56] duration metric: took 21.324037833s for fixHost
	I1007 05:28:57.005910   13189 main.go:141] libmachine: Using SSH client type: native
	I1007 05:28:57.006012   13189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1032de1f0] 0x1032e0a30 <nil>  [] 0s} localhost 52428 <nil> <nil>}
	I1007 05:28:57.006018   13189 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1007 05:28:57.067491   13189 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728304137.548994296
	
	I1007 05:28:57.067501   13189 fix.go:216] guest clock: 1728304137.548994296
	I1007 05:28:57.067505   13189 fix.go:229] Guest: 2024-10-07 05:28:57.548994296 -0700 PDT Remote: 2024-10-07 05:28:57.005864 -0700 PDT m=+21.448427251 (delta=543.130296ms)
	I1007 05:28:57.067516   13189 fix.go:200] guest clock delta is within tolerance: 543.130296ms
	I1007 05:28:57.067519   13189 start.go:83] releasing machines lock for "stopped-upgrade-431000", held for 21.38570325s
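The guest-clock check above compares `date +%s.%N` inside the VM against the host's wall clock at the same instant; the 543 ms delta is under minikube's drift tolerance, so no clock resync is forced. The same comparison by hand (sketch; port and key from the log, python3 used host-side because BSD date on macOS has no %N):

    # measure host/guest clock skew manually
    host=$(python3 -c 'import time; print(f"{time.time():.3f}")')
    guest=$(ssh -i /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/id_rsa \
            -p 52428 -o StrictHostKeyChecking=no docker@127.0.0.1 date +%s.%N)
    echo "skew: $(echo "$guest - $host" | bc) seconds"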
	I1007 05:28:57.067600   13189 ssh_runner.go:195] Run: cat /version.json
	I1007 05:28:57.067611   13189 sshutil.go:53] new ssh client: &{IP:localhost Port:52428 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/id_rsa Username:docker}
	I1007 05:28:57.067683   13189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 05:28:57.067704   13189 sshutil.go:53] new ssh client: &{IP:localhost Port:52428 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/id_rsa Username:docker}
	W1007 05:28:57.068118   13189 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:52566->127.0.0.1:52428: write: broken pipe
	I1007 05:28:57.068135   13189 retry.go:31] will retry after 316.684788ms: ssh: handshake failed: write tcp 127.0.0.1:52566->127.0.0.1:52428: write: broken pipe
	W1007 05:28:57.425724   13189 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1007 05:28:57.425850   13189 ssh_runner.go:195] Run: systemctl --version
	I1007 05:28:57.429025   13189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1007 05:28:57.431557   13189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1007 05:28:57.431623   13189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1007 05:28:57.435620   13189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1007 05:28:57.441855   13189 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
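The two find/sed runs above rewrite any existing bridge/podman CNI configs to the 10.244.0.0/16 pod subnet and drop IPv6 entries; the log confirms 87-podman-bridge.conflist was rewritten. A spot check of the result (sketch):

    # confirm the rewritten pod subnet inside the guest
    out/minikube-darwin-arm64 ssh -p stopped-upgrade-431000 -- \
      sudo grep -n subnet /etc/cni/net.d/87-podman-bridge.conflist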
	I1007 05:28:57.441865   13189 start.go:495] detecting cgroup driver to use...
	I1007 05:28:57.441955   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 05:28:57.449216   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1007 05:28:57.453367   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1007 05:28:57.456594   13189 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1007 05:28:57.456626   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1007 05:28:57.459463   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 05:28:57.462461   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1007 05:28:57.465756   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 05:28:57.468938   13189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 05:28:57.471870   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1007 05:28:57.474746   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1007 05:28:57.478257   13189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1007 05:28:57.481644   13189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 05:28:57.484462   13189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 05:28:57.487008   13189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:28:57.570694   13189 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1007 05:28:57.581806   13189 start.go:495] detecting cgroup driver to use...
	I1007 05:28:57.581891   13189 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1007 05:28:57.588711   13189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 05:28:57.592887   13189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1007 05:28:57.599173   13189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1007 05:28:57.604383   13189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1007 05:28:57.608967   13189 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1007 05:28:57.662112   13189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1007 05:28:57.667879   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 05:28:57.673652   13189 ssh_runner.go:195] Run: which cri-dockerd
	I1007 05:28:57.674871   13189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1007 05:28:57.678046   13189 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1007 05:28:57.683012   13189 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1007 05:28:57.765468   13189 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1007 05:28:57.845559   13189 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1007 05:28:57.845618   13189 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1007 05:28:57.850883   13189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:28:57.933569   13189 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1007 05:28:59.088166   13189 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.154600791s)
	I1007 05:28:59.088239   13189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1007 05:28:59.092932   13189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1007 05:28:59.097475   13189 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1007 05:28:59.170413   13189 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1007 05:28:59.252122   13189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:28:59.329744   13189 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1007 05:28:59.335725   13189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1007 05:28:59.340216   13189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:28:59.422940   13189 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1007 05:28:59.461258   13189 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1007 05:28:59.461360   13189 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1007 05:28:59.464285   13189 start.go:563] Will wait 60s for crictl version
	I1007 05:28:59.464364   13189 ssh_runner.go:195] Run: which crictl
	I1007 05:28:59.465844   13189 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 05:28:59.480795   13189 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
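At this point crictl has been pointed at cri-dockerd via /etc/crictl.yaml, which is why the version probe reports the docker runtime. The same query can be made with the endpoint spelled out explicitly (sketch):

    # query the CRI endpoint directly, bypassing /etc/crictl.yaml
    out/minikube-darwin-arm64 ssh -p stopped-upgrade-431000 -- \
      sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version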
	I1007 05:28:59.480885   13189 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1007 05:28:59.497349   13189 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1007 05:28:59.518369   13189 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1007 05:28:59.518498   13189 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1007 05:28:59.519868   13189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 05:28:59.523445   13189 kubeadm.go:883] updating cluster {Name:stopped-upgrade-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52462 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1007 05:28:59.523490   13189 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1007 05:28:59.523537   13189 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1007 05:28:59.534411   13189 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1007 05:28:59.534420   13189 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1007 05:28:59.534481   13189 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1007 05:28:59.537964   13189 ssh_runner.go:195] Run: which lz4
	I1007 05:28:59.539168   13189 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1007 05:28:59.540463   13189 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1007 05:28:59.540478   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1007 05:29:00.486927   13189 docker.go:649] duration metric: took 947.816083ms to copy over tarball
	I1007 05:29:00.487006   13189 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1007 05:29:01.683273   13189 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.196266916s)
	I1007 05:29:01.683289   13189 ssh_runner.go:146] rm: /preloaded.tar.lz4
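The ~360 MB preload tarball is copied into the guest, unpacked over /var with lz4-compressed tar, and then deleted. To sanity-check such a tarball on the host before a run, assuming GNU tar and lz4 are installed:

    # verify the preload archive is intact and list a few entries
    lz4 -t /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
    tar -I lz4 -tf /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 | head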
	I1007 05:29:01.699037   13189 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1007 05:29:01.702429   13189 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1007 05:29:01.707506   13189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:29:01.790911   13189 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1007 05:29:03.436893   13189 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.645996917s)
	I1007 05:29:03.436992   13189 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1007 05:29:03.450576   13189 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1007 05:29:03.450587   13189 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1007 05:29:03.450593   13189 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
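The mismatch is one of naming: the v1.26-era preload ships images under k8s.gcr.io, while this minikube expects registry.k8s.io names, so every control-plane image is treated as missing and reloaded from the host cache. A hypothetical manual shortcut (not what the test does) would be to retag in place inside the guest:

    # retag one preloaded image to the registry name the newer minikube expects
    out/minikube-darwin-arm64 ssh -p stopped-upgrade-431000 -- \
      docker tag k8s.gcr.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-apiserver:v1.24.1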
	I1007 05:29:03.454810   13189 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:29:03.457081   13189 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:29:03.458356   13189 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:29:03.458484   13189 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:29:03.460262   13189 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:29:03.460282   13189 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:29:03.461688   13189 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:29:03.461840   13189 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:29:03.462626   13189 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:29:03.463213   13189 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:29:03.464518   13189 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1007 05:29:03.464780   13189 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:29:03.465459   13189 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1007 05:29:03.465792   13189 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:29:03.466402   13189 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1007 05:29:03.467235   13189 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1007 05:29:04.033583   13189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:29:04.035567   13189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:29:04.046068   13189 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1007 05:29:04.046110   13189 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:29:04.046172   13189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1007 05:29:04.049155   13189 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1007 05:29:04.049179   13189 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:29:04.049248   13189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1007 05:29:04.064046   13189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1007 05:29:04.066659   13189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1007 05:29:04.076855   13189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:29:04.088172   13189 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1007 05:29:04.088191   13189 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:29:04.088257   13189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1007 05:29:04.098286   13189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1007 05:29:04.118259   13189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:29:04.128192   13189 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1007 05:29:04.128212   13189 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:29:04.128270   13189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1007 05:29:04.138110   13189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W1007 05:29:04.146294   13189 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1007 05:29:04.146426   13189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:29:04.156293   13189 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1007 05:29:04.156315   13189 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:29:04.156370   13189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1007 05:29:04.165887   13189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1007 05:29:04.166017   13189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1007 05:29:04.168279   13189 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1007 05:29:04.168289   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1007 05:29:04.209856   13189 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1007 05:29:04.209869   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1007 05:29:04.246895   13189 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1007 05:29:04.253211   13189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1007 05:29:04.263127   13189 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1007 05:29:04.263148   13189 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1007 05:29:04.263210   13189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1007 05:29:04.264259   13189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1007 05:29:04.276483   13189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1007 05:29:04.276638   13189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1007 05:29:04.282902   13189 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1007 05:29:04.282929   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1007 05:29:04.282981   13189 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1007 05:29:04.282998   13189 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1007 05:29:04.283048   13189 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1007 05:29:04.292064   13189 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1007 05:29:04.292082   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1007 05:29:04.295566   13189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1007 05:29:04.295711   13189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	W1007 05:29:04.315226   13189 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1007 05:29:04.315331   13189 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:29:04.331014   13189 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1007 05:29:04.331085   13189 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1007 05:29:04.331110   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1007 05:29:04.331647   13189 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1007 05:29:04.331668   13189 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:29:04.331715   13189 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:29:04.361715   13189 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1007 05:29:04.361900   13189 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1007 05:29:04.372598   13189 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1007 05:29:04.372633   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1007 05:29:04.446709   13189 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1007 05:29:04.446724   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1007 05:29:04.808664   13189 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1007 05:29:04.808687   13189 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1007 05:29:04.808695   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1007 05:29:04.943874   13189 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1007 05:29:04.943919   13189 cache_images.go:92] duration metric: took 1.493347292s to LoadCachedImages
	W1007 05:29:04.943981   13189 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
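The sequence above shows minikube's cache-load pattern for each image: `stat` the tarball on the guest, `scp` it over if missing, then stream it into the Docker daemon with `sudo cat ... | docker load`. A minimal Go sketch of that final load step (illustrative only, not minikube's actual ssh_runner code; the path mirrors the /var/lib/minikube/images layout from the log):

```go
// Stream a cached image tarball into the Docker daemon, the local
// equivalent of `cat tarball | docker load` seen in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func dockerLoad(tarball string) error {
	f, err := os.Open(tarball)
	if err != nil {
		return err
	}
	defer f.Close()

	cmd := exec.Command("docker", "load")
	cmd.Stdin = f // pipe the tarball on stdin
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := dockerLoad("/var/lib/minikube/images/coredns_v1.8.6"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```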
	I1007 05:29:04.943987   13189 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1007 05:29:04.944046   13189 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
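The kubelet unit drop-in above is generated from the node config that follows it. A hedged sketch of how such a unit could be templated with Go's text/template (the field names here are illustrative, not minikube's actual types):

```go
// Render a kubelet systemd drop-in like the one in the log from a few
// node parameters, using only the standard library.
package main

import (
	"os"
	"text/template"
)

const unit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	err := t.Execute(os.Stdout, struct{ Version, Node, IP string }{
		"v1.24.1", "stopped-upgrade-431000", "10.0.2.15",
	})
	if err != nil {
		panic(err)
	}
}
```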
	I1007 05:29:04.944130   13189 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1007 05:29:04.957750   13189 cni.go:84] Creating CNI manager for ""
	I1007 05:29:04.957766   13189 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:29:04.957775   13189 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 05:29:04.957786   13189 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-431000 NodeName:stopped-upgrade-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 05:29:04.957851   13189 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-431000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
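The generated kubeadm.yaml above is a single multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`. A small Go sketch that splits the stream and lists each document's kind (standard library only; a real consumer would use a YAML decoder; the path comes from the log):

```go
// Split a multi-document kubeadm config and report each document's kind.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				fmt.Printf("doc %d: %s\n", i, line)
			}
		}
	}
}
```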
	
	I1007 05:29:04.957921   13189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1007 05:29:04.961801   13189 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 05:29:04.961842   13189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 05:29:04.964983   13189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1007 05:29:04.970107   13189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 05:29:04.975456   13189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1007 05:29:04.980837   13189 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1007 05:29:04.982045   13189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
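The bash one-liner above makes the /etc/hosts entry idempotent: drop any existing control-plane.minikube.internal line, append the fresh mapping, and copy the temp file back. The same edit sketched in Go (illustrative only; the log uses `sudo cp` rather than a rename, and the IP and hostname come from the log):

```go
// Idempotently (re)write the control-plane.minikube.internal hosts entry.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hosts = "/etc/hosts"
	const entry = "10.0.2.15\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hosts)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line) // keep everything except stale entries
		}
	}
	kept = append(kept, entry, "")
	tmp := hosts + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := os.Rename(tmp, hosts); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```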
	I1007 05:29:04.985586   13189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:29:05.062687   13189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 05:29:05.068319   13189 certs.go:68] Setting up /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000 for IP: 10.0.2.15
	I1007 05:29:05.068327   13189 certs.go:194] generating shared ca certs ...
	I1007 05:29:05.068336   13189 certs.go:226] acquiring lock for ca certs: {Name:mkc7f2d51afe66903c603984849255f5d4b47504 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:29:05.068511   13189 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.key
	I1007 05:29:05.068551   13189 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/proxy-client-ca.key
	I1007 05:29:05.068559   13189 certs.go:256] generating profile certs ...
	I1007 05:29:05.068620   13189 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/client.key
	I1007 05:29:05.068638   13189 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.key.67a38bc6
	I1007 05:29:05.068651   13189 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.crt.67a38bc6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1007 05:29:05.125855   13189 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.crt.67a38bc6 ...
	I1007 05:29:05.125869   13189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.crt.67a38bc6: {Name:mka9eac84c12dce0636ec1fb7e6b06bf09b3c1be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:29:05.126341   13189 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.key.67a38bc6 ...
	I1007 05:29:05.126347   13189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.key.67a38bc6: {Name:mkb2381e1c0063e6b89ce0166903306a3ddcd99b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:29:05.126558   13189 certs.go:381] copying /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.crt.67a38bc6 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.crt
	I1007 05:29:05.126681   13189 certs.go:385] copying /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.key.67a38bc6 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.key
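The ".67a38bc6" suffix above keys the generated apiserver cert to its IP/SAN set, so a cert regenerated for a different set gets a distinct file name before being copied to the canonical apiserver.crt/key. A sketch reproducing the idea with an FNV-32 hash (minikube's exact hash function may differ):

```go
// Derive a short, deterministic suffix from a certificate's IP list.
package main

import (
	"fmt"
	"hash/fnv"
	"strings"
)

func certSuffix(ips []string) string {
	h := fnv.New32a()
	h.Write([]byte(strings.Join(ips, ",")))
	return fmt.Sprintf("%08x", h.Sum32())
}

func main() {
	// IPs taken from the crypto.go:68 line in the log above.
	fmt.Println(certSuffix([]string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "10.0.2.15"}))
}
```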
	I1007 05:29:05.126821   13189 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/proxy-client.key
	I1007 05:29:05.126966   13189 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/11284.pem (1338 bytes)
	W1007 05:29:05.126990   13189 certs.go:480] ignoring /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/11284_empty.pem, impossibly tiny 0 bytes
	I1007 05:29:05.126995   13189 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca-key.pem (1675 bytes)
	I1007 05:29:05.127022   13189 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem (1082 bytes)
	I1007 05:29:05.127040   13189 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem (1123 bytes)
	I1007 05:29:05.127057   13189 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/key.pem (1675 bytes)
	I1007 05:29:05.127095   13189 certs.go:484] found cert: /Users/jenkins/minikube-integration/18424-10771/.minikube/files/etc/ssl/certs/112842.pem (1708 bytes)
	I1007 05:29:05.127470   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 05:29:05.134372   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1007 05:29:05.141138   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 05:29:05.147951   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1007 05:29:05.154942   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1007 05:29:05.162075   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 05:29:05.169545   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 05:29:05.177179   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 05:29:05.184484   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 05:29:05.191186   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/11284.pem --> /usr/share/ca-certificates/11284.pem (1338 bytes)
	I1007 05:29:05.198219   13189 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18424-10771/.minikube/files/etc/ssl/certs/112842.pem --> /usr/share/ca-certificates/112842.pem (1708 bytes)
	I1007 05:29:05.205364   13189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 05:29:05.210399   13189 ssh_runner.go:195] Run: openssl version
	I1007 05:29:05.212328   13189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 05:29:05.215063   13189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 05:29:05.216822   13189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:25 /usr/share/ca-certificates/minikubeCA.pem
	I1007 05:29:05.216851   13189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 05:29:05.218501   13189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 05:29:05.221840   13189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11284.pem && ln -fs /usr/share/ca-certificates/11284.pem /etc/ssl/certs/11284.pem"
	I1007 05:29:05.225085   13189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11284.pem
	I1007 05:29:05.226474   13189 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 12:13 /usr/share/ca-certificates/11284.pem
	I1007 05:29:05.226499   13189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11284.pem
	I1007 05:29:05.228329   13189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11284.pem /etc/ssl/certs/51391683.0"
	I1007 05:29:05.231151   13189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/112842.pem && ln -fs /usr/share/ca-certificates/112842.pem /etc/ssl/certs/112842.pem"
	I1007 05:29:05.234388   13189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/112842.pem
	I1007 05:29:05.235796   13189 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 12:13 /usr/share/ca-certificates/112842.pem
	I1007 05:29:05.235820   13189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/112842.pem
	I1007 05:29:05.237489   13189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/112842.pem /etc/ssl/certs/3ec20f2e.0"
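The openssl/symlink sequence above installs each CA into the system trust store: OpenSSL looks up CAs in /etc/ssl/certs by subject-hash file names such as b5213941.0, where the hash is the output of `openssl x509 -hash -noout`. A Go sketch of the same `ln -fs` convention (paths from the log):

```go
// Create the subject-hash symlink OpenSSL uses to locate a CA certificate.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const cert = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // ln -f: replace any stale link
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```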
	I1007 05:29:05.240492   13189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 05:29:05.241925   13189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 05:29:05.244812   13189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 05:29:05.246645   13189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 05:29:05.248637   13189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 05:29:05.250414   13189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 05:29:05.252223   13189 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
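Each `openssl x509 -checkend 86400` call above asserts that the certificate stays valid for at least the next 24 hours; a non-zero exit is the signal to regenerate. The equivalent check in Go with crypto/x509 (a sketch; the cert path is one of those from the log):

```go
// Report whether a PEM certificate remains valid for the given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Same predicate as -checkend: NotAfter must lie beyond now+d.
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}
```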
	I1007 05:29:05.253965   13189 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52462 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1007 05:29:05.254033   13189 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1007 05:29:05.264348   13189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 05:29:05.267484   13189 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 05:29:05.267489   13189 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 05:29:05.267515   13189 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 05:29:05.270307   13189 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 05:29:05.270593   13189 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-431000" does not appear in /Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:29:05.270693   13189 kubeconfig.go:62] /Users/jenkins/minikube-integration/18424-10771/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-431000" cluster setting kubeconfig missing "stopped-upgrade-431000" context setting]
	I1007 05:29:05.270889   13189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/kubeconfig: {Name:mkfa460adb077498749c83f32a682247504db19f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:29:05.271319   13189 kapi.go:59] client config for stopped-upgrade-431000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104d33ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 05:29:05.271668   13189 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 05:29:05.274514   13189 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-431000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
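The drift check above relies on `diff -u` exit semantics: status 0 means the deployed kubeadm.yaml matches the freshly generated one, status 1 means they differ (the unified diff is logged), and status 2 is a real error. A Go sketch of that three-way branch (illustrative; paths from the log):

```go
// Detect kubeadm config drift by running `diff -u old new` and
// distinguishing "identical", "differs", and "diff itself failed".
package main

import (
	"fmt"
	"os/exec"
)

func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // exit 0: files identical
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // exit 1: out holds the unified diff
	}
	return false, "", err // exit 2 or exec failure
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("drifted:", drifted, "err:", err)
	fmt.Print(diff)
}
```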
	I1007 05:29:05.274521   13189 kubeadm.go:1160] stopping kube-system containers ...
	I1007 05:29:05.274569   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1007 05:29:05.285689   13189 docker.go:483] Stopping containers: [870237c16304 ee10baafa906 d47d3188153e ae6910d4e111 84309b560471 96c0fdc311b8 17f5d7610b4a 6ce14c0f1d79]
	I1007 05:29:05.285765   13189 ssh_runner.go:195] Run: docker stop 870237c16304 ee10baafa906 d47d3188153e ae6910d4e111 84309b560471 96c0fdc311b8 17f5d7610b4a 6ce14c0f1d79
	I1007 05:29:05.297097   13189 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1007 05:29:05.302833   13189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 05:29:05.306082   13189 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 05:29:05.306087   13189 kubeadm.go:157] found existing configuration files:
	
	I1007 05:29:05.306116   13189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/admin.conf
	I1007 05:29:05.309349   13189 kubeadm.go:163] "https://control-plane.minikube.internal:52462" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 05:29:05.309380   13189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 05:29:05.311938   13189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/kubelet.conf
	I1007 05:29:05.314412   13189 kubeadm.go:163] "https://control-plane.minikube.internal:52462" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 05:29:05.314443   13189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 05:29:05.317648   13189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/controller-manager.conf
	I1007 05:29:05.320703   13189 kubeadm.go:163] "https://control-plane.minikube.internal:52462" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 05:29:05.320729   13189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 05:29:05.323162   13189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/scheduler.conf
	I1007 05:29:05.326083   13189 kubeadm.go:163] "https://control-plane.minikube.internal:52462" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 05:29:05.326109   13189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 05:29:05.328991   13189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 05:29:05.331749   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:29:05.353000   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:29:05.873772   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:29:06.007841   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1007 05:29:06.032302   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
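Note that the restart path above never runs a full `kubeadm init`; it replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same config file, with PATH prefixed so the pinned v1.24.1 binaries win. A sketch of that loop (not minikube's actual code; phase list and paths taken from the log):

```go
// Re-run kubeadm init phase-by-phase against an existing config.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		// Prefer the pinned binaries, mirroring the log's env PATH wrapper.
		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.24.1:"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Fprintf(os.Stderr, "kubeadm %v: %v\n%s", p, err, out)
			os.Exit(1)
		}
	}
}
```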
	I1007 05:29:06.055719   13189 api_server.go:52] waiting for apiserver process to appear ...
	I1007 05:29:06.055818   13189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:29:06.556750   13189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:29:07.057881   13189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:29:07.062256   13189 api_server.go:72] duration metric: took 1.006557416s to wait for apiserver process to appear ...
	I1007 05:29:07.062272   13189 api_server.go:88] waiting for apiserver healthz status ...
	I1007 05:29:07.062282   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:12.064269   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:12.064316   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:17.064473   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:17.064509   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:22.065069   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:22.065111   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:27.065565   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:27.065581   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:32.066091   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:32.066155   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:37.067003   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:37.067050   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:42.068225   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:42.068269   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:47.068644   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:47.068685   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:52.070186   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:52.070232   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:29:57.072238   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:29:57.072277   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:02.072906   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:02.072943   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:07.075138   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
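Every attempt above times out after roughly five seconds and is retried; the apiserver never answers /healthz, which is what ultimately fails this test. A minimal Go sketch of the same poll loop (illustrative; TLS verification is skipped here purely for brevity, whereas minikube authenticates with client certs):

```go
// Poll the apiserver's /healthz endpoint until it responds or a
// deadline passes, with a short per-request timeout.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second, // matches the ~5 s gaps between attempts in the log
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("apiserver healthy")
				return
			}
		}
	}
	fmt.Println("gave up waiting for apiserver")
}
```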
	I1007 05:30:07.075320   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:30:07.087291   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:30:07.087380   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:30:07.097777   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:30:07.097862   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:30:07.108310   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:30:07.108392   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:30:07.118706   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:30:07.118797   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:30:07.129444   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:30:07.129526   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:30:07.140466   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:30:07.140546   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:30:07.151119   13189 logs.go:282] 0 containers: []
	W1007 05:30:07.151128   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:30:07.151189   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:30:07.161716   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:30:07.161740   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:30:07.161745   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:30:07.173463   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:30:07.173473   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:30:07.190906   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:30:07.190915   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:30:07.205151   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:30:07.205162   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:30:07.247174   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:30:07.247183   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:30:07.261704   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:30:07.261715   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:30:07.273912   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:30:07.273923   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:30:07.300166   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:30:07.300173   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:30:07.408768   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:30:07.408782   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:30:07.423363   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:30:07.423373   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:30:07.434816   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:30:07.434826   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:30:07.446612   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:30:07.446623   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:30:07.483726   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:30:07.483736   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:30:07.487762   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:30:07.487769   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:30:07.502501   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:30:07.502510   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:30:07.518761   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:30:07.518773   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:30:07.537186   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:30:07.537198   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
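Once healthz stalls, the run falls into the diagnostics cycle above: enumerate containers whose names match the kubelet's k8s_<component> convention, then tail the last 400 lines from each. A sketch of that gathering loop (illustrative only; component list abbreviated from the log):

```go
// List containers per control-plane component and tail their logs,
// mirroring the docker ps / docker logs pairs in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, err)
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}
```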
	I1007 05:30:10.050508   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:15.052800   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:15.053090   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:30:15.075807   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:30:15.075923   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:30:15.092410   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:30:15.092495   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:30:15.104401   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:30:15.104484   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:30:15.115406   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:30:15.115498   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:30:15.126807   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:30:15.126878   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:30:15.137368   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:30:15.137471   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:30:15.147966   13189 logs.go:282] 0 containers: []
	W1007 05:30:15.147980   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:30:15.148046   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:30:15.158561   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:30:15.158586   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:30:15.158592   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:30:15.173013   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:30:15.173025   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:30:15.187333   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:30:15.187347   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:30:15.198536   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:30:15.198546   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:30:15.210734   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:30:15.210746   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:30:15.235855   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:30:15.235863   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:30:15.272699   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:30:15.272711   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:30:15.296050   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:30:15.296060   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:30:15.307710   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:30:15.307723   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:30:15.318329   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:30:15.318344   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:30:15.357028   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:30:15.357042   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:30:15.371407   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:30:15.371420   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:30:15.385091   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:30:15.385107   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:30:15.397211   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:30:15.397226   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:30:15.401753   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:30:15.401760   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:30:15.439522   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:30:15.439533   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:30:15.455053   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:30:15.455064   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:30:17.974324   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:22.976473   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:22.976702   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:30:23.000424   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:30:23.000535   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:30:23.020042   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:30:23.020132   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:30:23.032108   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:30:23.032188   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:30:23.042597   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:30:23.042677   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:30:23.054222   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:30:23.054312   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:30:23.065036   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:30:23.065112   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:30:23.075853   13189 logs.go:282] 0 containers: []
	W1007 05:30:23.075864   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:30:23.075935   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:30:23.087961   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:30:23.087979   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:30:23.087984   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:30:23.099127   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:30:23.099138   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:30:23.110305   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:30:23.110319   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:30:23.114350   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:30:23.114357   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:30:23.151365   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:30:23.151377   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:30:23.163140   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:30:23.163151   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:30:23.174743   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:30:23.174752   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:30:23.191659   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:30:23.191669   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:30:23.204793   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:30:23.204808   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:30:23.242721   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:30:23.242732   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:30:23.266087   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:30:23.266097   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:30:23.277837   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:30:23.277848   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:30:23.292533   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:30:23.292544   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:30:23.306384   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:30:23.306396   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:30:23.324072   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:30:23.324082   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:30:23.360585   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:30:23.360596   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:30:23.377710   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:30:23.377724   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:30:25.892022   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:30.894612   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:30.894778   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:30:30.911313   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:30:30.911401   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:30:30.927879   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:30:30.927951   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:30:30.941039   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:30:30.941118   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:30:30.951865   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:30:30.951965   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:30:30.962366   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:30:30.962432   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:30:30.973340   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:30:30.973421   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:30:30.983508   13189 logs.go:282] 0 containers: []
	W1007 05:30:30.983520   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:30:30.983598   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:30:30.994280   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:30:30.994298   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:30:30.994312   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:30:31.009163   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:30:31.009179   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:30:31.020902   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:30:31.020913   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:30:31.037761   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:30:31.037770   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:30:31.049978   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:30:31.049988   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:30:31.074757   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:30:31.074764   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:30:31.110999   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:30:31.111009   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:30:31.122704   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:30:31.122718   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:30:31.144802   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:30:31.144813   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:30:31.158230   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:30:31.158244   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:30:31.172460   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:30:31.172473   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:30:31.177329   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:30:31.177339   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:30:31.216010   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:30:31.216033   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:30:31.230853   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:30:31.230866   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:30:31.268372   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:30:31.268383   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:30:31.279471   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:30:31.279496   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:30:31.292070   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:30:31.292081   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:30:33.806002   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:38.808338   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:38.808558   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:30:38.830976   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:30:38.831068   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:30:38.845603   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:30:38.845685   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:30:38.856510   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:30:38.856587   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:30:38.867208   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:30:38.867302   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:30:38.877761   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:30:38.877850   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:30:38.888726   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:30:38.888810   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:30:38.898625   13189 logs.go:282] 0 containers: []
	W1007 05:30:38.898640   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:30:38.898707   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:30:38.909585   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:30:38.909606   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:30:38.909612   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:30:38.949167   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:30:38.949177   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:30:38.963086   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:30:38.963097   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:30:39.000697   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:30:39.000707   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:30:39.012391   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:30:39.012402   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:30:39.023729   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:30:39.023740   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:30:39.061682   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:30:39.061697   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:30:39.076131   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:30:39.076143   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:30:39.092093   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:30:39.092103   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:30:39.105516   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:30:39.105532   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:30:39.129653   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:30:39.129662   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:30:39.149607   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:30:39.149620   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:30:39.161821   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:30:39.161837   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:30:39.166160   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:30:39.166168   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:30:39.180674   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:30:39.180685   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:30:39.196668   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:30:39.196681   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:30:39.216860   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:30:39.216870   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:30:41.730942   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:46.733188   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:46.733479   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:30:46.760639   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:30:46.760805   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:30:46.778513   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:30:46.778624   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:30:46.793509   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:30:46.793601   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:30:46.804989   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:30:46.805069   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:30:46.815474   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:30:46.815550   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:30:46.825751   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:30:46.825830   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:30:46.836227   13189 logs.go:282] 0 containers: []
	W1007 05:30:46.836240   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:30:46.836304   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:30:46.850731   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:30:46.850749   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:30:46.850755   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:30:46.862372   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:30:46.862386   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:30:46.875824   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:30:46.875838   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:30:46.887676   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:30:46.887688   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:30:46.912317   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:30:46.912325   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:30:46.950713   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:30:46.950725   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:30:46.962115   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:30:46.962125   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:30:46.976814   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:30:46.976828   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:30:46.989226   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:30:46.989238   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:30:47.028055   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:30:47.028074   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:30:47.042207   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:30:47.042221   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:30:47.058777   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:30:47.058788   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:30:47.076334   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:30:47.076345   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:30:47.091075   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:30:47.091087   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:30:47.102672   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:30:47.102682   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:30:47.115111   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:30:47.115124   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:30:47.119374   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:30:47.119381   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:30:49.659369   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:30:54.661502   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:30:54.661838   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:30:54.688523   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:30:54.688670   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:30:54.708795   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:30:54.708896   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:30:54.723717   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:30:54.723793   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:30:54.734597   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:30:54.734676   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:30:54.745704   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:30:54.745783   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:30:54.756784   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:30:54.756862   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:30:54.767324   13189 logs.go:282] 0 containers: []
	W1007 05:30:54.767335   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:30:54.767402   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:30:54.777784   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:30:54.777801   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:30:54.777807   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:30:54.782167   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:30:54.782174   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:30:54.795220   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:30:54.795231   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:30:54.806774   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:30:54.806787   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:30:54.819899   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:30:54.819911   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:30:54.845298   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:30:54.845306   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:30:54.858515   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:30:54.858526   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:30:54.879272   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:30:54.879288   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:30:54.918904   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:30:54.918916   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:30:54.933951   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:30:54.933965   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:30:54.948233   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:30:54.948244   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:30:54.962807   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:30:54.962818   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:30:54.979013   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:30:54.979023   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:30:54.993313   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:30:54.993324   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:30:55.027749   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:30:55.027764   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:30:55.065221   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:30:55.065231   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:30:55.076454   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:30:55.076464   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:30:57.595719   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:02.597910   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:02.598096   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:02.614993   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:31:02.615092   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:02.631321   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:31:02.631407   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:02.641982   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:31:02.642058   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:02.652418   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:31:02.652494   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:02.662914   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:31:02.662987   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:02.673815   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:31:02.673891   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:02.684368   13189 logs.go:282] 0 containers: []
	W1007 05:31:02.684388   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:02.684452   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:02.698221   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:31:02.698238   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:02.698243   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:02.722017   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:31:02.722025   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:02.733582   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:02.733597   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:31:02.772005   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:31:02.772014   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:31:02.786264   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:31:02.786274   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:31:02.803998   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:31:02.804009   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:31:02.818553   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:31:02.818562   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:31:02.830058   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:31:02.830068   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:31:02.845339   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:31:02.845349   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:31:02.883648   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:31:02.883660   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:31:02.895504   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:31:02.895517   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:31:02.907076   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:31:02.907087   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:31:02.919864   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:31:02.919876   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:31:02.933658   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:31:02.933671   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:31:02.949687   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:02.949700   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:02.953812   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:02.953819   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:02.996431   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:31:02.996442   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:31:05.514396   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:10.516563   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:10.516816   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:10.539874   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:31:10.540009   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:10.555302   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:31:10.555385   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:10.568289   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:31:10.568367   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:10.579147   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:31:10.579224   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:10.589449   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:31:10.589519   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:10.600068   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:31:10.600144   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:10.609851   13189 logs.go:282] 0 containers: []
	W1007 05:31:10.609863   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:10.609929   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:10.620019   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:31:10.620037   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:10.620043   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:31:10.659431   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:31:10.659442   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:31:10.674223   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:31:10.674235   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:31:10.691503   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:31:10.691514   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:31:10.703794   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:31:10.703807   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:31:10.715181   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:10.715194   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:10.719977   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:10.719982   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:10.757185   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:31:10.757200   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:31:10.771554   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:31:10.771569   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:31:10.787139   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:31:10.787154   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:31:10.805760   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:31:10.805773   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:31:10.819437   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:31:10.819446   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:31:10.833679   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:31:10.833688   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:31:10.845755   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:31:10.845768   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:31:10.885551   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:31:10.885563   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:31:10.899418   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:10.899430   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:10.922810   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:31:10.922817   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:13.437777   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:18.438961   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:18.439103   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:18.453018   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:31:18.453104   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:18.465095   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:31:18.465176   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:18.475349   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:31:18.475429   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:18.486165   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:31:18.486250   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:18.496911   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:31:18.496988   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:18.507578   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:31:18.507656   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:18.517664   13189 logs.go:282] 0 containers: []
	W1007 05:31:18.517676   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:18.517734   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:18.527846   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:31:18.527865   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:18.527871   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:18.532447   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:31:18.532457   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:31:18.544365   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:31:18.544375   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:31:18.556691   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:31:18.556704   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:31:18.568357   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:31:18.568368   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:31:18.585637   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:31:18.585647   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:31:18.597488   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:31:18.597499   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:31:18.611440   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:18.611455   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:31:18.651057   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:31:18.651068   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:31:18.669946   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:31:18.669957   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:31:18.685936   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:18.685948   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:18.720291   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:31:18.720304   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:31:18.759439   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:31:18.759453   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:31:18.773544   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:31:18.773555   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:31:18.786931   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:31:18.786942   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:31:18.798550   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:18.798561   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:18.821196   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:31:18.821204   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:21.334469   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:26.336667   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:26.336846   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:26.350817   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:31:26.350907   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:26.365021   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:31:26.365102   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:26.376216   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:31:26.376290   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:26.386869   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:31:26.386950   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:26.396952   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:31:26.397027   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:26.407192   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:31:26.407261   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:26.421844   13189 logs.go:282] 0 containers: []
	W1007 05:31:26.421855   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:26.421923   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:26.432457   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:31:26.432474   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:26.432480   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:26.466811   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:31:26.466826   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:31:26.481388   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:31:26.481400   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:31:26.519162   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:31:26.519172   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:31:26.533241   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:31:26.533252   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:31:26.546023   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:26.546034   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:31:26.585016   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:31:26.585028   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:31:26.610110   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:31:26.610123   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:31:26.624555   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:31:26.624568   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:26.636156   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:31:26.636171   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:31:26.649672   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:31:26.649680   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:31:26.661027   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:31:26.661039   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:31:26.673163   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:31:26.673173   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:31:26.690406   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:31:26.690417   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:31:26.701472   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:31:26.701487   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:31:26.712692   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:26.712703   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:26.736276   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:26.736283   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:29.240956   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:34.243135   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:34.243402   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:34.260774   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:31:34.260872   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:34.273955   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:31:34.274043   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:34.286838   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:31:34.286920   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:34.302331   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:31:34.302412   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:34.313912   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:31:34.313991   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:34.324702   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:31:34.324777   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:34.335037   13189 logs.go:282] 0 containers: []
	W1007 05:31:34.335049   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:34.335113   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:34.345630   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:31:34.345651   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:31:34.345657   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:31:34.357722   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:34.357735   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:34.392890   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:31:34.392902   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:31:34.407236   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:31:34.407247   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:31:34.423856   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:31:34.423868   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:31:34.438358   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:31:34.438372   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:34.450096   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:31:34.450107   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:31:34.463888   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:31:34.463903   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:31:34.479510   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:31:34.479524   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:31:34.494402   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:31:34.494416   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:31:34.505446   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:34.505456   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:34.529079   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:34.529088   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:34.533107   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:31:34.533113   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:31:34.570694   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:31:34.570704   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:31:34.585435   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:34.585448   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:31:34.622133   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:31:34.622141   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:31:34.639441   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:31:34.639456   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:31:37.152510   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:42.154646   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:42.154802   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:42.168101   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:31:42.168188   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:42.179799   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:31:42.179875   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:42.198063   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:31:42.198140   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:42.208761   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:31:42.208837   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:42.219922   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:31:42.220007   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:42.235089   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:31:42.235165   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:42.246817   13189 logs.go:282] 0 containers: []
	W1007 05:31:42.246829   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:42.246896   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:42.256948   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:31:42.256962   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:42.256968   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:42.261543   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:42.261550   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:42.297144   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:31:42.297155   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:31:42.313207   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:31:42.313220   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:31:42.324709   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:31:42.324722   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:31:42.336046   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:31:42.336056   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:31:42.347356   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:42.347368   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:31:42.384839   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:31:42.384848   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:31:42.400572   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:31:42.400582   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:31:42.438594   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:31:42.438605   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:31:42.452219   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:31:42.452228   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:31:42.468384   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:31:42.468403   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:31:42.482042   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:31:42.482054   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:31:42.494468   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:31:42.494480   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:31:42.514362   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:42.514375   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:42.538200   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:31:42.538208   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:31:42.551260   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:31:42.551274   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:45.069713   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:50.071449   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:50.071624   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:50.090093   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:31:50.090187   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:50.102417   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:31:50.102497   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:50.112826   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:31:50.112893   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:50.126878   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:31:50.126958   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:50.137729   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:31:50.137812   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:50.148326   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:31:50.148407   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:50.161728   13189 logs.go:282] 0 containers: []
	W1007 05:31:50.161739   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:50.161804   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:50.172441   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:31:50.172461   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:31:50.172466   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:31:50.184685   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:31:50.184698   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:31:50.196351   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:31:50.196362   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:31:50.208132   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:31:50.208143   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:31:50.220633   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:31:50.220645   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:31:50.243353   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:31:50.243364   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:31:50.262440   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:31:50.262451   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:31:50.280042   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:50.280057   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:50.316781   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:50.316792   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:50.320934   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:31:50.320940   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:31:50.335411   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:31:50.335421   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:31:50.374546   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:31:50.374559   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:31:50.390162   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:31:50.390172   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:31:50.404499   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:31:50.404509   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:31:50.419703   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:50.419712   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:50.443228   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:50.443236   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:31:50.481203   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:31:50.481215   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:52.995360   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:31:57.997610   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:31:57.997852   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:31:58.014410   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:31:58.014506   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:31:58.026826   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:31:58.026908   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:31:58.038462   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:31:58.038544   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:31:58.049482   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:31:58.049557   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:31:58.059957   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:31:58.060033   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:31:58.071039   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:31:58.071122   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:31:58.086524   13189 logs.go:282] 0 containers: []
	W1007 05:31:58.086535   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:31:58.086599   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:31:58.097405   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:31:58.097422   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:31:58.097426   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:31:58.120794   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:31:58.120801   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:31:58.132526   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:31:58.132538   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:31:58.136971   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:31:58.136977   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:31:58.150772   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:31:58.150783   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:31:58.164876   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:31:58.164886   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:31:58.178944   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:31:58.178956   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:31:58.190615   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:31:58.190626   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:31:58.202028   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:31:58.202040   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:31:58.238821   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:31:58.238829   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:31:58.251318   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:31:58.251328   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:31:58.268447   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:31:58.268457   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:31:58.279529   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:31:58.279539   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:31:58.314342   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:31:58.314353   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:31:58.352546   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:31:58.352556   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:31:58.368390   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:31:58.368402   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:31:58.381812   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:31:58.381827   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:32:00.895289   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:05.897879   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:05.898106   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:05.920661   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:32:05.920767   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:05.936174   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:32:05.936256   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:05.948573   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:32:05.948653   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:05.963426   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:32:05.963513   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:05.974064   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:32:05.974144   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:05.984932   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:32:05.985007   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:05.994987   13189 logs.go:282] 0 containers: []
	W1007 05:32:05.995002   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:05.995069   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:06.005543   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:32:06.005561   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:06.005566   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:32:06.043176   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:32:06.043185   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:32:06.057339   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:32:06.057349   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:32:06.068421   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:32:06.068433   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:06.084164   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:32:06.084175   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:32:06.123780   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:32:06.123795   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:32:06.140086   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:32:06.140102   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:32:06.157002   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:32:06.157015   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:32:06.174660   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:32:06.174674   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:32:06.193999   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:32:06.194012   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:32:06.205963   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:32:06.205974   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:32:06.217218   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:32:06.217232   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:32:06.234739   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:06.234752   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:06.239175   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:06.239184   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:06.274872   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:32:06.274883   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:32:06.286298   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:32:06.286307   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:32:06.300306   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:06.300317   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:08.826569   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:13.827757   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:13.827934   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:13.842072   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:32:13.842161   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:13.853219   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:32:13.853312   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:13.863745   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:32:13.863826   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:13.875046   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:32:13.875124   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:13.885544   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:32:13.885620   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:13.896104   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:32:13.896191   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:13.906501   13189 logs.go:282] 0 containers: []
	W1007 05:32:13.906514   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:13.906572   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:13.916699   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:32:13.916717   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:32:13.916724   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:32:13.931096   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:32:13.931107   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:32:13.943351   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:32:13.943364   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:32:13.963321   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:32:13.963338   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:32:13.976822   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:13.976832   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:14.000852   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:14.000861   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:14.034600   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:32:14.034611   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:32:14.047532   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:32:14.047542   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:32:14.059067   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:32:14.059080   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:32:14.070882   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:32:14.070892   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:32:14.094408   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:32:14.094422   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:14.106659   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:14.106674   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:32:14.146102   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:14.146109   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:14.150165   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:32:14.150174   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:32:14.168873   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:32:14.168886   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:32:14.208492   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:32:14.208506   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:32:14.223448   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:32:14.223459   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:32:16.737521   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:21.739804   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:21.740045   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:21.760589   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:32:21.760698   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:21.775465   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:32:21.775550   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:21.787984   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:32:21.788052   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:21.799231   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:32:21.799315   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:21.809955   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:32:21.810021   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:21.820126   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:32:21.820197   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:21.830712   13189 logs.go:282] 0 containers: []
	W1007 05:32:21.830730   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:21.830797   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:21.840931   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:32:21.840948   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:32:21.840954   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:32:21.854854   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:32:21.854868   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:32:21.872167   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:21.872179   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:32:21.909965   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:32:21.909976   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:32:21.924046   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:32:21.924058   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:32:21.941513   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:32:21.941527   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:32:21.952970   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:32:21.952983   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:32:21.965044   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:21.965059   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:21.969263   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:21.969271   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:22.005013   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:32:22.005051   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:32:22.051569   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:32:22.051580   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:32:22.063181   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:32:22.063192   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:32:22.078377   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:32:22.078390   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:32:22.090490   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:32:22.090502   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:22.102839   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:32:22.102854   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:32:22.116639   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:32:22.116654   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:32:22.128164   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:22.128175   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:24.652100   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:29.654332   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:29.654491   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:29.666780   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:32:29.666862   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:29.677707   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:32:29.677800   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:29.688130   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:32:29.688205   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:29.698618   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:32:29.698691   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:29.709936   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:32:29.710011   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:29.721185   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:32:29.721259   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:29.731389   13189 logs.go:282] 0 containers: []
	W1007 05:32:29.731400   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:29.731454   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:29.741647   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:32:29.741664   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:32:29.741670   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:32:29.764816   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:32:29.764825   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:32:29.776847   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:32:29.776859   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:32:29.792622   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:32:29.792632   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:32:29.804035   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:29.804043   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:29.825515   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:32:29.825523   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:32:29.840187   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:32:29.840201   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:32:29.878013   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:32:29.878028   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:32:29.895477   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:32:29.895488   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:32:29.909130   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:32:29.909139   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:32:29.920640   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:32:29.920654   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:29.932271   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:29.932282   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:29.936475   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:32:29.936481   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:32:29.948258   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:32:29.948269   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:32:29.963244   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:29.963254   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:29.998429   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:32:29.998439   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:32:30.013060   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:30.013072   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:32:32.554531   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:37.556927   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:37.557505   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:37.597180   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:32:37.597335   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:37.619658   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:32:37.619795   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:37.636841   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:32:37.636927   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:37.650273   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:32:37.650353   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:37.664314   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:32:37.664393   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:37.676867   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:32:37.676954   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:37.692057   13189 logs.go:282] 0 containers: []
	W1007 05:32:37.692073   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:37.692134   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:37.702521   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:32:37.702539   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:32:37.702545   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:32:37.716495   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:32:37.716507   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:32:37.734615   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:32:37.734625   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:32:37.748674   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:32:37.748684   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:37.761117   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:37.761134   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:37.765397   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:37.765405   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:37.800764   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:32:37.800774   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:32:37.812324   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:37.812333   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:37.835771   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:37.835786   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:32:37.875716   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:32:37.875742   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:32:37.914591   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:32:37.914605   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:32:37.925645   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:32:37.925660   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:32:37.937725   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:32:37.937738   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:32:37.952850   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:32:37.952859   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:32:37.967131   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:32:37.967147   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:32:37.981893   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:32:37.981907   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:32:37.993821   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:32:37.993832   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:32:40.507221   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:45.509011   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:45.509502   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:45.540714   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:32:45.540860   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:45.560026   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:32:45.560153   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:45.575154   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:32:45.575229   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:45.590692   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:32:45.590776   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:45.601664   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:32:45.601740   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:45.612653   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:32:45.612732   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:45.622648   13189 logs.go:282] 0 containers: []
	W1007 05:32:45.622660   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:45.622722   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:45.633506   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:32:45.633524   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:32:45.633530   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:32:45.650002   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:32:45.650016   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:32:45.665876   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:32:45.665887   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:32:45.679639   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:32:45.679655   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:32:45.694319   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:32:45.694328   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:32:45.711018   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:32:45.711031   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:32:45.722995   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:32:45.723005   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:32:45.737474   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:32:45.737482   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:45.749413   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:45.749430   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:32:45.788323   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:45.788331   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:45.822598   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:32:45.822614   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:32:45.861032   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:32:45.861043   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:32:45.873170   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:45.873181   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:45.895085   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:45.895094   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:45.899639   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:32:45.899646   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:32:45.916643   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:32:45.916653   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:32:45.929343   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:32:45.929354   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:32:48.445290   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:32:53.447512   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:32:53.447873   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:32:53.480539   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:32:53.480668   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:32:53.498229   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:32:53.498333   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:32:53.517077   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:32:53.517160   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:32:53.528575   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:32:53.528652   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:32:53.539464   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:32:53.539535   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:32:53.550699   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:32:53.550779   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:32:53.561340   13189 logs.go:282] 0 containers: []
	W1007 05:32:53.561352   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:32:53.561420   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:32:53.572361   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:32:53.572379   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:32:53.572384   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:32:53.609724   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:32:53.609732   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:32:53.623515   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:32:53.623525   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:32:53.635184   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:32:53.635215   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:32:53.654280   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:32:53.654291   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:32:53.688602   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:32:53.688619   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:32:53.702826   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:32:53.702842   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:32:53.717547   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:32:53.717559   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:32:53.734634   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:32:53.734643   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:32:53.786848   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:32:53.786859   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:32:53.803489   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:32:53.803501   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:32:53.818982   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:32:53.818993   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:32:53.832489   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:32:53.832501   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:32:53.845223   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:32:53.845239   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:32:53.849930   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:32:53.849944   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:32:53.862958   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:32:53.862970   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:32:53.877910   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:32:53.877922   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:32:56.404613   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:01.406868   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:01.407165   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:33:01.436015   13189 logs.go:282] 2 containers: [15e4580af5ec 870237c16304]
	I1007 05:33:01.436152   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:33:01.451632   13189 logs.go:282] 2 containers: [07927ea1f52b 84309b560471]
	I1007 05:33:01.451738   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:33:01.464707   13189 logs.go:282] 1 containers: [f42cb7438ff2]
	I1007 05:33:01.464789   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:33:01.475601   13189 logs.go:282] 2 containers: [0b2759bf3bf5 ee10baafa906]
	I1007 05:33:01.475680   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:33:01.486418   13189 logs.go:282] 1 containers: [fe347ea79ada]
	I1007 05:33:01.486497   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:33:01.497794   13189 logs.go:282] 2 containers: [75bbd9740214 d47d3188153e]
	I1007 05:33:01.497865   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:33:01.507784   13189 logs.go:282] 0 containers: []
	W1007 05:33:01.507794   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:33:01.507854   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:33:01.522053   13189 logs.go:282] 2 containers: [a2d0045e17d2 7d92b8ec3cc7]
	I1007 05:33:01.522070   13189 logs.go:123] Gathering logs for etcd [84309b560471] ...
	I1007 05:33:01.522076   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 84309b560471"
	I1007 05:33:01.536576   13189 logs.go:123] Gathering logs for coredns [f42cb7438ff2] ...
	I1007 05:33:01.536587   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f42cb7438ff2"
	I1007 05:33:01.550510   13189 logs.go:123] Gathering logs for kube-proxy [fe347ea79ada] ...
	I1007 05:33:01.550521   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe347ea79ada"
	I1007 05:33:01.561806   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:33:01.561816   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:33:01.596723   13189 logs.go:123] Gathering logs for kube-apiserver [870237c16304] ...
	I1007 05:33:01.596735   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 870237c16304"
	I1007 05:33:01.632863   13189 logs.go:123] Gathering logs for storage-provisioner [a2d0045e17d2] ...
	I1007 05:33:01.632874   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2d0045e17d2"
	I1007 05:33:01.644217   13189 logs.go:123] Gathering logs for storage-provisioner [7d92b8ec3cc7] ...
	I1007 05:33:01.644232   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7d92b8ec3cc7"
	I1007 05:33:01.655431   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:33:01.655444   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:33:01.659734   13189 logs.go:123] Gathering logs for etcd [07927ea1f52b] ...
	I1007 05:33:01.659743   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 07927ea1f52b"
	I1007 05:33:01.674269   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:33:01.674281   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:33:01.697709   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:33:01.697719   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:33:01.737026   13189 logs.go:123] Gathering logs for kube-controller-manager [75bbd9740214] ...
	I1007 05:33:01.737038   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 75bbd9740214"
	I1007 05:33:01.754708   13189 logs.go:123] Gathering logs for kube-scheduler [ee10baafa906] ...
	I1007 05:33:01.754720   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ee10baafa906"
	I1007 05:33:01.769524   13189 logs.go:123] Gathering logs for kube-controller-manager [d47d3188153e] ...
	I1007 05:33:01.769535   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d47d3188153e"
	I1007 05:33:01.783476   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:33:01.783486   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:33:01.795644   13189 logs.go:123] Gathering logs for kube-apiserver [15e4580af5ec] ...
	I1007 05:33:01.795658   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 15e4580af5ec"
	I1007 05:33:01.809838   13189 logs.go:123] Gathering logs for kube-scheduler [0b2759bf3bf5] ...
	I1007 05:33:01.809847   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b2759bf3bf5"
	I1007 05:33:04.323770   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:09.324010   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:09.324090   13189 kubeadm.go:597] duration metric: took 4m4.061115041s to restartPrimaryControlPlane
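The four minutes of polling above follow a fixed cadence: each /healthz probe gets a 5-second client timeout, a failed probe triggers a diagnostics pass, and the loop repeats until an overall deadline expires. A minimal shell sketch of that cadence — not minikube's actual Go implementation, and gather_component_logs is a hypothetical stand-in for the docker/journalctl probes shown earlier:

    # Hedged sketch of the poll-diagnose-retry cadence, under the assumptions above.
    deadline=$((SECONDS + 244))                     # ~4m4s, matching the duration metric logged
    until curl -ks --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do
        if [ "$SECONDS" -ge "$deadline" ]; then
            echo "control plane never became healthy; falling back to kubeadm reset"
            break
        fi
        gather_component_logs                       # hypothetical helper: the probes sketched earlier
    done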
	W1007 05:33:09.324130   13189 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1007 05:33:09.324144   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1007 05:33:10.362061   13189 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.037925s)
	I1007 05:33:10.362151   13189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 05:33:10.367102   13189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 05:33:10.370309   13189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 05:33:10.373042   13189 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 05:33:10.373047   13189 kubeadm.go:157] found existing configuration files:
	
	I1007 05:33:10.373071   13189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/admin.conf
	I1007 05:33:10.375499   13189 kubeadm.go:163] "https://control-plane.minikube.internal:52462" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 05:33:10.375532   13189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 05:33:10.378467   13189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/kubelet.conf
	I1007 05:33:10.381628   13189 kubeadm.go:163] "https://control-plane.minikube.internal:52462" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 05:33:10.381658   13189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 05:33:10.384457   13189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/controller-manager.conf
	I1007 05:33:10.386952   13189 kubeadm.go:163] "https://control-plane.minikube.internal:52462" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 05:33:10.386985   13189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 05:33:10.390223   13189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/scheduler.conf
	I1007 05:33:10.393381   13189 kubeadm.go:163] "https://control-plane.minikube.internal:52462" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52462 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 05:33:10.393410   13189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
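The four grep/rm pairs above are minikube's stale-config check before re-initializing: each kubeconfig under /etc/kubernetes must mention the expected control-plane endpoint, otherwise it is deleted (here every file is already absent, so each rm -f is a no-op). A minimal Go sketch of the same check, with the endpoint and file list copied from this log rather than from minikube's source:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // Sketch of the stale-config check seen above (kubeadm.go:163): each
    // kubeconfig must reference the expected control-plane endpoint, or it
    // is removed before `kubeadm init` runs. Endpoint and paths are taken
    // from this log, not from the minikube source.
    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:52462")
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !bytes.Contains(data, endpoint) {
                // Missing file or wrong endpoint: treat as stale, like `rm -f`.
                os.Remove(f)
                fmt.Printf("removed stale config %s\n", f)
            }
        }
    }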
	I1007 05:33:10.395974   13189 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1007 05:33:10.412067   13189 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1007 05:33:10.412097   13189 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 05:33:10.461623   13189 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 05:33:10.461681   13189 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 05:33:10.461727   13189 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1007 05:33:10.510685   13189 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 05:33:10.517883   13189 out.go:235]   - Generating certificates and keys ...
	I1007 05:33:10.517920   13189 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 05:33:10.517950   13189 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 05:33:10.517986   13189 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1007 05:33:10.518016   13189 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1007 05:33:10.518060   13189 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1007 05:33:10.518095   13189 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1007 05:33:10.518130   13189 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1007 05:33:10.518163   13189 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1007 05:33:10.518213   13189 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1007 05:33:10.518267   13189 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1007 05:33:10.518286   13189 kubeadm.go:310] [certs] Using the existing "sa" key
	I1007 05:33:10.518318   13189 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 05:33:10.651005   13189 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 05:33:10.738099   13189 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 05:33:10.834311   13189 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 05:33:10.882995   13189 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 05:33:10.913761   13189 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 05:33:10.914368   13189 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 05:33:10.914572   13189 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 05:33:11.004910   13189 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 05:33:11.008886   13189 out.go:235]   - Booting up control plane ...
	I1007 05:33:11.008935   13189 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 05:33:11.008985   13189 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 05:33:11.009028   13189 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 05:33:11.009087   13189 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 05:33:11.009195   13189 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1007 05:33:15.507186   13189 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.500866 seconds
	I1007 05:33:15.507277   13189 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 05:33:15.510753   13189 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 05:33:16.031658   13189 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 05:33:16.031945   13189 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-431000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 05:33:16.539670   13189 kubeadm.go:310] [bootstrap-token] Using token: 1669s0.m3g28gg0e6g0bg5g
	I1007 05:33:16.545737   13189 out.go:235]   - Configuring RBAC rules ...
	I1007 05:33:16.545799   13189 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 05:33:16.550648   13189 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 05:33:16.552542   13189 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 05:33:16.553423   13189 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 05:33:16.554257   13189 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 05:33:16.555292   13189 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 05:33:16.558243   13189 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 05:33:16.712950   13189 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 05:33:16.957140   13189 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 05:33:16.957649   13189 kubeadm.go:310] 
	I1007 05:33:16.957685   13189 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 05:33:16.957688   13189 kubeadm.go:310] 
	I1007 05:33:16.957735   13189 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 05:33:16.957740   13189 kubeadm.go:310] 
	I1007 05:33:16.957755   13189 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 05:33:16.957785   13189 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 05:33:16.957810   13189 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 05:33:16.957812   13189 kubeadm.go:310] 
	I1007 05:33:16.957848   13189 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 05:33:16.957851   13189 kubeadm.go:310] 
	I1007 05:33:16.957904   13189 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 05:33:16.957909   13189 kubeadm.go:310] 
	I1007 05:33:16.957936   13189 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 05:33:16.957985   13189 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 05:33:16.958036   13189 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 05:33:16.958040   13189 kubeadm.go:310] 
	I1007 05:33:16.958096   13189 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 05:33:16.958154   13189 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 05:33:16.958159   13189 kubeadm.go:310] 
	I1007 05:33:16.958199   13189 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1669s0.m3g28gg0e6g0bg5g \
	I1007 05:33:16.958264   13189 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a062c1d11feacd55c1665e5cde1180fa46a0cb1088d7ea40ca5bcc8cf3f8fe8c \
	I1007 05:33:16.958276   13189 kubeadm.go:310] 	--control-plane 
	I1007 05:33:16.958281   13189 kubeadm.go:310] 
	I1007 05:33:16.958327   13189 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 05:33:16.958330   13189 kubeadm.go:310] 
	I1007 05:33:16.958373   13189 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1669s0.m3g28gg0e6g0bg5g \
	I1007 05:33:16.958428   13189 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a062c1d11feacd55c1665e5cde1180fa46a0cb1088d7ea40ca5bcc8cf3f8fe8c 
	I1007 05:33:16.958494   13189 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
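For reference, the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded SubjectPublicKeyInfo, which joining nodes use to pin the control plane during bootstrap. A sketch that recomputes it from a CA certificate; the path is an assumption based on the certificateDir logged earlier, while the hashing scheme itself is standard kubeadm:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // Recompute the kubeadm discovery-token CA cert hash: sha256 over the
    // DER-encoded SubjectPublicKeyInfo of the cluster CA certificate.
    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path assumed from the certs dir in this log
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }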
	I1007 05:33:16.958554   13189 cni.go:84] Creating CNI manager for ""
	I1007 05:33:16.958563   13189 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:33:16.962678   13189 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1007 05:33:16.969793   13189 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1007 05:33:16.972893   13189 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
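The 496-byte /etc/cni/net.d/1-k8s.conflist pushed above is a bridge + host-local IPAM configuration in the usual CNI conflist shape. An illustrative sketch of writing such a file; the field values here are assumptions for illustration, not the exact bytes minikube generated in this run:

    package main

    import "os"

    // Illustrative bridge CNI conflist in the shape the step above pushes;
    // subnet and names are assumptions, not the exact file from this run.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }
    `

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }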
	I1007 05:33:16.977555   13189 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 05:33:16.977612   13189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 05:33:16.977628   13189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-431000 minikube.k8s.io/updated_at=2024_10_07T05_33_16_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=stopped-upgrade-431000 minikube.k8s.io/primary=true
	I1007 05:33:17.016613   13189 ops.go:34] apiserver oom_adj: -16
	I1007 05:33:17.016676   13189 kubeadm.go:1113] duration metric: took 39.107333ms to wait for elevateKubeSystemPrivileges
	I1007 05:33:17.016688   13189 kubeadm.go:394] duration metric: took 4m11.767384375s to StartCluster
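The oom_adj value of -16 reported by ops.go a few lines up is read from /proc/<apiserver-pid>/oom_adj; a negative score tells the kernel's OOM killer to sacrifice other processes before the API server. A small sketch of the same probe (the pgrep matching is simplified relative to the -xnf pattern shown in the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // Read the apiserver's OOM adjustment the way the log does: resolve
    // the pid with pgrep, then read /proc/<pid>/oom_adj.
    func main() {
        out, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))
        adj, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", pid))
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }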
	I1007 05:33:17.016698   13189 settings.go:142] acquiring lock: {Name:mk5a4e22b238c18e7ccc84c412018fc85088176f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:33:17.016801   13189 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:33:17.017261   13189 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/kubeconfig: {Name:mkfa460adb077498749c83f32a682247504db19f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:33:17.017465   13189 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:33:17.017470   13189 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 05:33:17.017510   13189 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-431000"
	I1007 05:33:17.017518   13189 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-431000"
	W1007 05:33:17.017521   13189 addons.go:243] addon storage-provisioner should already be in state true
	I1007 05:33:17.017520   13189 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-431000"
	I1007 05:33:17.017534   13189 host.go:66] Checking if "stopped-upgrade-431000" exists ...
	I1007 05:33:17.017538   13189 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-431000"
	I1007 05:33:17.017565   13189 config.go:182] Loaded profile config "stopped-upgrade-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:33:17.017957   13189 retry.go:31] will retry after 1.169099476s: connect: dial unix /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/monitor: connect: connection refused
	I1007 05:33:17.018655   13189 kapi.go:59] client config for stopped-upgrade-431000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/stopped-upgrade-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/18424-10771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104d33ae0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1007 05:33:17.018816   13189 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-431000"
	W1007 05:33:17.018821   13189 addons.go:243] addon default-storageclass should already be in state true
	I1007 05:33:17.018827   13189 host.go:66] Checking if "stopped-upgrade-431000" exists ...
	I1007 05:33:17.019339   13189 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 05:33:17.019344   13189 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 05:33:17.019349   13189 sshutil.go:53] new ssh client: &{IP:localhost Port:52428 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/id_rsa Username:docker}
	I1007 05:33:17.021646   13189 out.go:177] * Verifying Kubernetes components...
	I1007 05:33:17.029724   13189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 05:33:17.123543   13189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 05:33:17.128760   13189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 05:33:17.130838   13189 api_server.go:52] waiting for apiserver process to appear ...
	I1007 05:33:17.130884   13189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 05:33:17.436442   13189 api_server.go:72] duration metric: took 418.971ms to wait for apiserver process to appear ...
	I1007 05:33:17.436456   13189 api_server.go:88] waiting for apiserver healthz status ...
	I1007 05:33:17.436466   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:17.436572   13189 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1007 05:33:17.436580   13189 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1007 05:33:18.193786   13189 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 05:33:18.197801   13189 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 05:33:18.197808   13189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 05:33:18.197815   13189 sshutil.go:53] new ssh client: &{IP:localhost Port:52428 SSHKeyPath:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/stopped-upgrade-431000/id_rsa Username:docker}
	I1007 05:33:18.235004   13189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 05:33:22.438470   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:22.438510   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:27.438776   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:27.438820   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:32.439168   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:32.439206   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:37.439667   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:37.439713   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:42.440358   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:42.440387   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1007 05:33:47.438126   13189 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1007 05:33:47.441087   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:47.441102   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:47.442447   13189 out.go:177] * Enabled addons: storage-provisioner
	I1007 05:33:47.450315   13189 addons.go:510] duration metric: took 30.433406875s for enable addons: enabled=[storage-provisioner]
	I1007 05:33:52.442040   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:52.442096   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:33:57.443672   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:33:57.443707   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:34:02.445297   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:34:02.445352   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:34:07.447442   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:34:07.447469   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:34:12.448794   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:34:12.448817   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:34:17.450920   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
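The alternating "Checking apiserver healthz" / "stopped:" lines above (and in the cycles below) are a fixed-interval poll: each attempt issues GET https://10.0.2.15:8443/healthz with a short client timeout, logs the failure, and retries until an overall deadline. A minimal sketch of that loop; the 5s timeout and 2s pause are inferred from the timestamp spacing, and certificate verification is skipped only because this is a throwaway probe:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // Poll the apiserver healthz endpoint the way the log does: one GET per
    // interval with a hard client timeout, logging failures and retrying.
    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // inferred from the ~5s gap between attempts
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probe only; real code pins the cluster CA
            },
        }
        for {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Printf("stopped: %v\n", err)
            } else {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Printf("healthz status: %s\n", resp.Status)
            }
            time.Sleep(2 * time.Second)
        }
    }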
	I1007 05:34:17.451101   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:34:17.462958   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:34:17.463041   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:34:17.473534   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:34:17.473616   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:34:17.484454   13189 logs.go:282] 2 containers: [4b50b80d34ff 8906704c7223]
	I1007 05:34:17.484532   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:34:17.503284   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:34:17.503356   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:34:17.516062   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:34:17.516140   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:34:17.526682   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:34:17.526760   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:34:17.537107   13189 logs.go:282] 0 containers: []
	W1007 05:34:17.537117   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:34:17.537183   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:34:17.547991   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:34:17.548005   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:34:17.548012   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:34:17.582790   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:34:17.582801   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:34:17.597177   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:34:17.597189   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:34:17.611373   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:34:17.611387   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:34:17.626388   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:34:17.626399   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:34:17.644553   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:34:17.644563   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:34:17.669305   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:34:17.669312   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:34:17.674091   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:34:17.674098   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:34:17.708490   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:34:17.708501   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:34:17.720548   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:34:17.720561   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:34:17.731897   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:34:17.731911   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:34:17.744332   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:34:17.744344   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:34:17.755312   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:34:17.755325   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
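Each diagnostic pass like the one above follows the same two-step recipe: resolve container IDs with docker ps -a --filter name=k8s_<component> --format {{.ID}}, then tail the last 400 lines of each hit with docker logs. A condensed sketch of one pass, with the component list copied from the log and error handling kept minimal; the later passes in this log repeat it verbatim, only in a different gathering order:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // One diagnostic pass as seen in the log: resolve container IDs per
    // component, then tail each container's last 400 log lines.
    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("listing %s containers failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
                continue
            }
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
            }
        }
    }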
	I1007 05:34:20.267232   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:34:25.267745   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:34:25.267886   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:34:25.280299   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:34:25.280386   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:34:25.292069   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:34:25.292154   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:34:25.302356   13189 logs.go:282] 2 containers: [4b50b80d34ff 8906704c7223]
	I1007 05:34:25.302425   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:34:25.312956   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:34:25.313037   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:34:25.329136   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:34:25.329208   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:34:25.338933   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:34:25.339001   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:34:25.349348   13189 logs.go:282] 0 containers: []
	W1007 05:34:25.349363   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:34:25.349431   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:34:25.360106   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:34:25.360120   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:34:25.360126   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:34:25.365188   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:34:25.365193   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:34:25.379648   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:34:25.379658   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:34:25.404187   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:34:25.404197   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:34:25.419099   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:34:25.419115   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:34:25.431337   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:34:25.431347   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:34:25.449426   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:34:25.449436   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:34:25.485870   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:34:25.485879   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:34:25.520132   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:34:25.520144   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:34:25.534206   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:34:25.534217   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:34:25.545630   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:34:25.545642   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:34:25.558321   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:34:25.558332   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:34:25.569636   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:34:25.569647   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:34:28.085395   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:34:33.087622   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:34:33.087828   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:34:33.101185   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:34:33.101272   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:34:33.112056   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:34:33.112132   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:34:33.123175   13189 logs.go:282] 2 containers: [4b50b80d34ff 8906704c7223]
	I1007 05:34:33.123251   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:34:33.133447   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:34:33.133521   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:34:33.144292   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:34:33.144370   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:34:33.154422   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:34:33.154500   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:34:33.164437   13189 logs.go:282] 0 containers: []
	W1007 05:34:33.164447   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:34:33.164508   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:34:33.176026   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:34:33.176042   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:34:33.176048   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:34:33.212914   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:34:33.212926   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:34:33.227448   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:34:33.227459   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:34:33.243335   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:34:33.243346   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:34:33.254783   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:34:33.254797   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:34:33.289759   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:34:33.289767   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:34:33.305331   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:34:33.305342   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:34:33.316505   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:34:33.316521   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:34:33.330989   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:34:33.330999   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:34:33.344180   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:34:33.344190   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:34:33.361315   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:34:33.361325   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:34:33.386190   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:34:33.386197   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:34:33.397534   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:34:33.397544   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:34:35.904341   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:34:40.906908   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:34:40.907103   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:34:40.922673   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:34:40.922772   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:34:40.936922   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:34:40.936993   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:34:40.947622   13189 logs.go:282] 2 containers: [4b50b80d34ff 8906704c7223]
	I1007 05:34:40.947703   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:34:40.957790   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:34:40.957863   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:34:40.969109   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:34:40.969187   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:34:40.980236   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:34:40.980312   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:34:40.991115   13189 logs.go:282] 0 containers: []
	W1007 05:34:40.991126   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:34:40.991188   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:34:41.001542   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:34:41.001556   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:34:41.001562   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:34:41.012920   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:34:41.012929   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:34:41.027810   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:34:41.027819   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:34:41.051309   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:34:41.051318   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:34:41.062923   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:34:41.062936   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:34:41.076693   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:34:41.076708   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:34:41.081773   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:34:41.081780   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:34:41.116385   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:34:41.116399   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:34:41.132252   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:34:41.132266   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:34:41.144068   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:34:41.144081   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:34:41.155234   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:34:41.155244   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:34:41.172370   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:34:41.172383   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:34:41.185284   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:34:41.185299   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:34:43.719450   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:34:48.721763   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:34:48.722379   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:34:48.761136   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:34:48.761290   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:34:48.787231   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:34:48.787337   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:34:48.801860   13189 logs.go:282] 2 containers: [4b50b80d34ff 8906704c7223]
	I1007 05:34:48.801958   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:34:48.814059   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:34:48.814138   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:34:48.824832   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:34:48.824914   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:34:48.835799   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:34:48.835868   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:34:48.845769   13189 logs.go:282] 0 containers: []
	W1007 05:34:48.845783   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:34:48.845849   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:34:48.856584   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:34:48.856599   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:34:48.856604   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:34:48.892397   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:34:48.892404   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:34:48.897057   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:34:48.897066   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:34:48.934830   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:34:48.934844   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:34:48.949504   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:34:48.949517   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:34:48.963619   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:34:48.963631   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:34:48.975505   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:34:48.975517   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:34:48.987567   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:34:48.987579   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:34:48.999575   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:34:48.999587   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:34:49.014115   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:34:49.014126   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:34:49.025788   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:34:49.025800   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:34:49.043248   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:34:49.043259   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:34:49.054709   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:34:49.054722   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:34:51.579659   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:34:56.582420   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:34:56.582968   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:34:56.622453   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:34:56.622611   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:34:56.643103   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:34:56.643221   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:34:56.658317   13189 logs.go:282] 2 containers: [4b50b80d34ff 8906704c7223]
	I1007 05:34:56.658402   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:34:56.670498   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:34:56.670578   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:34:56.680953   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:34:56.681027   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:34:56.691406   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:34:56.691486   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:34:56.702958   13189 logs.go:282] 0 containers: []
	W1007 05:34:56.702969   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:34:56.703034   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:34:56.713692   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:34:56.713706   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:34:56.713711   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:34:56.725523   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:34:56.725539   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:34:56.737731   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:34:56.737744   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:34:56.741929   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:34:56.741936   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:34:56.756512   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:34:56.756521   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:34:56.774113   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:34:56.774124   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:34:56.785700   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:34:56.785713   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:34:56.807438   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:34:56.807448   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:34:56.830829   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:34:56.830836   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:34:56.863785   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:34:56.863794   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:34:56.899017   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:34:56.899027   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:34:56.910484   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:34:56.910494   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:34:56.930076   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:34:56.930090   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:34:59.444907   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:35:04.447615   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:35:04.448148   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:35:04.503714   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:35:04.503852   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:35:04.521492   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:35:04.521587   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:35:04.535075   13189 logs.go:282] 2 containers: [4b50b80d34ff 8906704c7223]
	I1007 05:35:04.535153   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:35:04.546396   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:35:04.546461   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:35:04.560794   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:35:04.560870   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:35:04.571838   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:35:04.571916   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:35:04.582242   13189 logs.go:282] 0 containers: []
	W1007 05:35:04.582253   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:35:04.582319   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:35:04.592472   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:35:04.592487   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:35:04.592493   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:35:04.606762   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:35:04.606771   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:35:04.618908   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:35:04.618918   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:35:04.630051   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:35:04.630061   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:35:04.652542   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:35:04.652549   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:35:04.663993   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:35:04.664003   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:35:04.668315   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:35:04.668324   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:35:04.702316   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:35:04.702329   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:35:04.717125   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:35:04.717137   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:35:04.729149   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:35:04.729161   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:35:04.749977   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:35:04.749987   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:35:04.766898   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:35:04.766909   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:35:04.800161   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:35:04.800173   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:35:07.316322   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:35:12.319169   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:35:12.319706   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:35:12.358887   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:35:12.359051   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:35:12.379737   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:35:12.379838   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:35:12.404769   13189 logs.go:282] 2 containers: [4b50b80d34ff 8906704c7223]
	I1007 05:35:12.404851   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:35:12.416577   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:35:12.416656   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:35:12.427418   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:35:12.427497   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:35:12.442466   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:35:12.442542   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:35:12.452866   13189 logs.go:282] 0 containers: []
	W1007 05:35:12.452883   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:35:12.452950   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:35:12.463948   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:35:12.463968   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:35:12.463974   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:35:12.499917   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:35:12.499931   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:35:12.514183   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:35:12.514196   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:35:12.525809   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:35:12.525823   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:35:12.540064   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:35:12.540076   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:35:12.562392   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:35:12.562401   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:35:12.579974   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:35:12.579985   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:35:12.592625   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:35:12.592640   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:35:12.625903   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:35:12.625912   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:35:12.629900   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:35:12.629909   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:35:12.644465   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:35:12.644475   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:35:12.655845   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:35:12.655857   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:35:12.667381   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:35:12.667391   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
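
With the IDs in hand, the cycle tails the last 400 lines from each source: journalctl for kubelet and for docker/cri-docker, a level-filtered dmesg, docker logs per container, kubectl describe nodes via the VM's pinned v1.24.1 binary, and a container-status listing that prefers crictl and falls back to docker. A sketch of that fan-out, reusing the exact shell commands from the log but running them locally for illustration (in the log they run inside the VM via ssh_runner):

// gather_sketch.go - command strings verbatim from the log; the Go
// scaffolding is an assumption for illustration.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := [][2]string{
		{"kubelet", `sudo journalctl -u kubelet -n 400`},
		{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
		{"Docker", `sudo journalctl -u docker -u cri-docker -n 400`},
		{"describe nodes", `sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`},
		// prefer crictl when installed, otherwise fall back to docker
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
		// per-container tails reuse the IDs found during discovery, e.g.:
		{"kube-apiserver [ac3a1616acaa]", `docker logs --tail 400 ac3a1616acaa`},
	}
	for _, s := range sources {
		fmt.Printf("Gathering logs for %s ...\n", s[0])
		out, err := exec.Command("/bin/bash", "-c", s[1]).CombinedOutput()
		if err != nil {
			fmt.Printf("  (failed: %v)\n", err)
			continue
		}
		_ = out // the real tool buffers these for the final report
	}
}
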
	I1007 05:35:15.191007   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:35:20.193254   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:35:20.193532   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:35:20.218168   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:35:20.218288   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:35:20.234842   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:35:20.234935   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:35:20.248159   13189 logs.go:282] 2 containers: [4b50b80d34ff 8906704c7223]
	I1007 05:35:20.248242   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:35:20.259311   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:35:20.259378   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:35:20.274607   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:35:20.274670   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:35:20.285166   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:35:20.285245   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:35:20.295278   13189 logs.go:282] 0 containers: []
	W1007 05:35:20.295290   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:35:20.295360   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:35:20.305393   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:35:20.305409   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:35:20.305415   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:35:20.309655   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:35:20.309664   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:35:20.323437   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:35:20.323449   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:35:20.334610   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:35:20.334620   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:35:20.346756   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:35:20.346767   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:35:20.369660   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:35:20.369669   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:35:20.387465   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:35:20.387474   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:35:20.399374   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:35:20.399385   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:35:20.434731   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:35:20.434739   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:35:20.469433   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:35:20.469444   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:35:20.491088   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:35:20.491101   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:35:20.502574   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:35:20.502584   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:35:20.517054   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:35:20.517064   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:35:23.028463   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:35:28.031072   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:35:28.031557   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:35:28.071959   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:35:28.072140   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:35:28.093823   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:35:28.093959   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:35:28.110467   13189 logs.go:282] 2 containers: [4b50b80d34ff 8906704c7223]
	I1007 05:35:28.110542   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:35:28.122641   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:35:28.122728   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:35:28.133581   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:35:28.133648   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:35:28.145422   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:35:28.145484   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:35:28.159621   13189 logs.go:282] 0 containers: []
	W1007 05:35:28.159632   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:35:28.159698   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:35:28.170881   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:35:28.170897   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:35:28.170903   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:35:28.209872   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:35:28.209884   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:35:28.224045   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:35:28.224059   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:35:28.246457   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:35:28.246469   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:35:28.258376   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:35:28.258386   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:35:28.274435   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:35:28.274448   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:35:28.298348   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:35:28.298360   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:35:28.316522   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:35:28.316534   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:35:28.358213   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:35:28.358234   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:35:28.362777   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:35:28.362785   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:35:28.394835   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:35:28.394846   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:35:28.406351   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:35:28.406364   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:35:28.428477   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:35:28.428491   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:35:30.952574   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:35:35.954876   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:35:35.955398   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:35:35.995751   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:35:35.995912   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:35:36.017820   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:35:36.017945   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:35:36.033582   13189 logs.go:282] 4 containers: [af6d8f7f18fb 6a2e3a0485c9 4b50b80d34ff 8906704c7223]
	I1007 05:35:36.033669   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:35:36.046257   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:35:36.046331   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:35:36.056751   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:35:36.056820   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:35:36.068752   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:35:36.068820   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:35:36.079475   13189 logs.go:282] 0 containers: []
	W1007 05:35:36.079485   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:35:36.079548   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:35:36.089914   13189 logs.go:282] 1 containers: [9a20855d165a]
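
[Note: from this 05:35:36 cycle onward the coredns discovery returns four containers (af6d8f7f18fb and 6a2e3a0485c9 in addition to the earlier 4b50b80d34ff and 8906704c7223), so each later dump tails four separate coredns logs, while every healthz probe in the section keeps timing out.]
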
	I1007 05:35:36.089932   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:35:36.089938   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:35:36.103895   13189 logs.go:123] Gathering logs for coredns [6a2e3a0485c9] ...
	I1007 05:35:36.103905   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2e3a0485c9"
	I1007 05:35:36.115785   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:35:36.115798   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:35:36.127566   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:35:36.127579   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:35:36.163272   13189 logs.go:123] Gathering logs for coredns [af6d8f7f18fb] ...
	I1007 05:35:36.163281   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6d8f7f18fb"
	I1007 05:35:36.175540   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:35:36.175552   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:35:36.187507   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:35:36.187520   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:35:36.211422   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:35:36.211434   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:35:36.245492   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:35:36.245503   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:35:36.265564   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:35:36.265577   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:35:36.278162   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:35:36.278175   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:35:36.292310   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:35:36.292322   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:35:36.310823   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:35:36.310837   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:35:36.332816   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:35:36.332826   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:35:36.344542   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:35:36.344554   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:35:38.851264   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:35:43.852877   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:35:43.853445   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:35:43.892474   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:35:43.892634   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:35:43.914678   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:35:43.914811   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:35:43.931918   13189 logs.go:282] 4 containers: [af6d8f7f18fb 6a2e3a0485c9 4b50b80d34ff 8906704c7223]
	I1007 05:35:43.932015   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:35:43.945138   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:35:43.945209   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:35:43.967566   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:35:43.967645   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:35:43.978505   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:35:43.978583   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:35:43.993273   13189 logs.go:282] 0 containers: []
	W1007 05:35:43.993285   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:35:43.993352   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:35:44.003915   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:35:44.003934   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:35:44.003939   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:35:44.015489   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:35:44.015501   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:35:44.032670   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:35:44.032680   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:35:44.047162   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:35:44.047174   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:35:44.065009   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:35:44.065022   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:35:44.076776   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:35:44.076789   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:35:44.088066   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:35:44.088078   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:35:44.099639   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:35:44.099648   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:35:44.134141   13189 logs.go:123] Gathering logs for coredns [6a2e3a0485c9] ...
	I1007 05:35:44.134152   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2e3a0485c9"
	I1007 05:35:44.145449   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:35:44.145461   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:35:44.157093   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:35:44.157103   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:35:44.182734   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:35:44.182748   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:35:44.187895   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:35:44.187905   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:35:44.222487   13189 logs.go:123] Gathering logs for coredns [af6d8f7f18fb] ...
	I1007 05:35:44.222497   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6d8f7f18fb"
	I1007 05:35:44.233801   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:35:44.233810   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:35:46.751836   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:35:51.754108   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:35:51.754786   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:35:51.795475   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:35:51.795621   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:35:51.817077   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:35:51.817221   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:35:51.832534   13189 logs.go:282] 4 containers: [af6d8f7f18fb 6a2e3a0485c9 4b50b80d34ff 8906704c7223]
	I1007 05:35:51.832616   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:35:51.844619   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:35:51.844700   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:35:51.855895   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:35:51.855962   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:35:51.866937   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:35:51.867013   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:35:51.877267   13189 logs.go:282] 0 containers: []
	W1007 05:35:51.877279   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:35:51.877348   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:35:51.888239   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:35:51.888260   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:35:51.888266   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:35:51.904091   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:35:51.904101   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:35:51.921570   13189 logs.go:123] Gathering logs for coredns [6a2e3a0485c9] ...
	I1007 05:35:51.921583   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2e3a0485c9"
	I1007 05:35:51.933249   13189 logs.go:123] Gathering logs for coredns [af6d8f7f18fb] ...
	I1007 05:35:51.933261   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6d8f7f18fb"
	I1007 05:35:51.948327   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:35:51.948343   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:35:51.962730   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:35:51.962742   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:35:51.978064   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:35:51.978079   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:35:51.996239   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:35:51.996251   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:35:52.000745   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:35:52.000753   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:35:52.037297   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:35:52.037307   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:35:52.052060   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:35:52.052071   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:35:52.068355   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:35:52.068366   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:35:52.080229   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:35:52.080244   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:35:52.092084   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:35:52.092093   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:35:52.116849   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:35:52.116858   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:35:54.653330   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:35:59.656103   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:35:59.656620   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:35:59.687516   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:35:59.687653   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:35:59.706807   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:35:59.706912   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:35:59.721485   13189 logs.go:282] 4 containers: [af6d8f7f18fb 6a2e3a0485c9 4b50b80d34ff 8906704c7223]
	I1007 05:35:59.721573   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:35:59.733041   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:35:59.733117   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:35:59.746810   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:35:59.746880   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:35:59.757146   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:35:59.757224   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:35:59.772282   13189 logs.go:282] 0 containers: []
	W1007 05:35:59.772293   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:35:59.772351   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:35:59.788325   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:35:59.788343   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:35:59.788349   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:35:59.823663   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:35:59.823673   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:35:59.835518   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:35:59.835532   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:35:59.850513   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:35:59.850524   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:35:59.862616   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:35:59.862627   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:35:59.867521   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:35:59.867529   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:35:59.885555   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:35:59.885567   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:35:59.899190   13189 logs.go:123] Gathering logs for coredns [af6d8f7f18fb] ...
	I1007 05:35:59.899200   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6d8f7f18fb"
	I1007 05:35:59.911520   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:35:59.911530   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:35:59.947457   13189 logs.go:123] Gathering logs for coredns [6a2e3a0485c9] ...
	I1007 05:35:59.947471   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2e3a0485c9"
	I1007 05:35:59.959872   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:35:59.959884   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:35:59.974691   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:35:59.974703   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:36:00.000511   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:36:00.000529   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:36:00.012747   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:36:00.012760   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:36:00.024428   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:36:00.024441   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:36:02.543631   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:36:07.545864   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:36:07.545953   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:36:07.558873   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:36:07.558929   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:36:07.569480   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:36:07.569546   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:36:07.581030   13189 logs.go:282] 4 containers: [af6d8f7f18fb 6a2e3a0485c9 4b50b80d34ff 8906704c7223]
	I1007 05:36:07.581102   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:36:07.593095   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:36:07.593165   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:36:07.604460   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:36:07.604527   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:36:07.615514   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:36:07.615574   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:36:07.626257   13189 logs.go:282] 0 containers: []
	W1007 05:36:07.626270   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:36:07.626336   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:36:07.638289   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:36:07.638304   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:36:07.638309   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:36:07.654850   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:36:07.654862   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:36:07.672906   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:36:07.672917   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:36:07.697846   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:36:07.697859   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:36:07.711565   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:36:07.711573   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:36:07.726828   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:36:07.726844   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:36:07.742664   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:36:07.742677   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:36:07.756179   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:36:07.756190   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:36:07.790505   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:36:07.790520   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:36:07.804957   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:36:07.804968   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:36:07.817931   13189 logs.go:123] Gathering logs for coredns [6a2e3a0485c9] ...
	I1007 05:36:07.817942   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2e3a0485c9"
	I1007 05:36:07.836608   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:36:07.836619   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:36:07.854649   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:36:07.854658   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:36:07.859538   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:36:07.859544   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:36:07.897516   13189 logs.go:123] Gathering logs for coredns [af6d8f7f18fb] ...
	I1007 05:36:07.897529   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6d8f7f18fb"
	I1007 05:36:10.410669   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:36:15.412384   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:36:15.412568   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:36:15.425088   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:36:15.425166   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:36:15.435575   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:36:15.435648   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:36:15.445974   13189 logs.go:282] 4 containers: [af6d8f7f18fb 6a2e3a0485c9 4b50b80d34ff 8906704c7223]
	I1007 05:36:15.446043   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:36:15.456445   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:36:15.456518   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:36:15.467318   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:36:15.467381   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:36:15.478061   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:36:15.478126   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:36:15.487666   13189 logs.go:282] 0 containers: []
	W1007 05:36:15.487677   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:36:15.487733   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:36:15.497817   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:36:15.497836   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:36:15.497841   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:36:15.531080   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:36:15.531089   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:36:15.535652   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:36:15.535660   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:36:15.570560   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:36:15.570570   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:36:15.584536   13189 logs.go:123] Gathering logs for coredns [af6d8f7f18fb] ...
	I1007 05:36:15.584550   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6d8f7f18fb"
	I1007 05:36:15.597386   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:36:15.597399   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:36:15.609013   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:36:15.609025   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:36:15.620671   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:36:15.620684   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:36:15.641386   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:36:15.641399   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:36:15.664261   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:36:15.664267   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:36:15.675866   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:36:15.675876   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:36:15.693150   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:36:15.693160   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:36:15.704394   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:36:15.704404   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:36:15.718981   13189 logs.go:123] Gathering logs for coredns [6a2e3a0485c9] ...
	I1007 05:36:15.718994   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2e3a0485c9"
	I1007 05:36:15.730372   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:36:15.730381   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:36:18.247523   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:36:23.249783   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:36:23.250262   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:36:23.280250   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:36:23.280384   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:36:23.298717   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:36:23.298806   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:36:23.313990   13189 logs.go:282] 4 containers: [af6d8f7f18fb 6a2e3a0485c9 4b50b80d34ff 8906704c7223]
	I1007 05:36:23.314076   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:36:23.325378   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:36:23.325457   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:36:23.337205   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:36:23.337281   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:36:23.348636   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:36:23.348699   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:36:23.358874   13189 logs.go:282] 0 containers: []
	W1007 05:36:23.358888   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:36:23.358953   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:36:23.369661   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:36:23.369677   13189 logs.go:123] Gathering logs for coredns [af6d8f7f18fb] ...
	I1007 05:36:23.369682   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6d8f7f18fb"
	I1007 05:36:23.381055   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:36:23.381065   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:36:23.395693   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:36:23.395704   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:36:23.400210   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:36:23.400218   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:36:23.435382   13189 logs.go:123] Gathering logs for coredns [6a2e3a0485c9] ...
	I1007 05:36:23.435393   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2e3a0485c9"
	I1007 05:36:23.447440   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:36:23.447450   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:36:23.459075   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:36:23.459085   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:36:23.480986   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:36:23.480996   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:36:23.514545   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:36:23.514559   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:36:23.528456   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:36:23.528467   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:36:23.539900   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:36:23.539909   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:36:23.551573   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:36:23.551583   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:36:23.575137   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:36:23.575144   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:36:23.586559   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:36:23.586569   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:36:23.600302   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:36:23.600311   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:36:26.114288   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:36:31.116963   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:36:31.117065   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:36:31.130694   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:36:31.130759   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:36:31.142862   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:36:31.142959   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:36:31.154120   13189 logs.go:282] 4 containers: [af6d8f7f18fb 6a2e3a0485c9 4b50b80d34ff 8906704c7223]
	I1007 05:36:31.154209   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:36:31.165502   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:36:31.165580   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:36:31.177944   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:36:31.178043   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:36:31.189275   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:36:31.189341   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:36:31.200462   13189 logs.go:282] 0 containers: []
	W1007 05:36:31.200477   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:36:31.200549   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:36:31.212160   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:36:31.212175   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:36:31.212181   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:36:31.247546   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:36:31.247565   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:36:31.252662   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:36:31.252673   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:36:31.267791   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:36:31.267803   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:36:31.280231   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:36:31.280244   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:36:31.305182   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:36:31.305200   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:36:31.321889   13189 logs.go:123] Gathering logs for coredns [af6d8f7f18fb] ...
	I1007 05:36:31.321897   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6d8f7f18fb"
	I1007 05:36:31.336632   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:36:31.336644   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:36:31.350321   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:36:31.350331   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:36:31.367177   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:36:31.367188   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:36:31.407161   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:36:31.407175   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:36:31.419386   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:36:31.419395   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:36:31.432218   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:36:31.432234   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:36:31.447581   13189 logs.go:123] Gathering logs for coredns [6a2e3a0485c9] ...
	I1007 05:36:31.447592   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2e3a0485c9"
	I1007 05:36:31.460464   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:36:31.460473   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:36:33.980762   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:36:38.983288   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:36:38.983771   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:36:39.018876   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:36:39.019036   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:36:39.039563   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:36:39.039675   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:36:39.054270   13189 logs.go:282] 4 containers: [af6d8f7f18fb 6a2e3a0485c9 4b50b80d34ff 8906704c7223]
	I1007 05:36:39.054338   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:36:39.065789   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:36:39.065859   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:36:39.076234   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:36:39.076298   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:36:39.086577   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:36:39.086653   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:36:39.097143   13189 logs.go:282] 0 containers: []
	W1007 05:36:39.097155   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:36:39.097217   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:36:39.107672   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:36:39.107689   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:36:39.107694   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:36:39.140907   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:36:39.140916   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:36:39.152242   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:36:39.152255   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:36:39.169524   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:36:39.169536   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:36:39.174654   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:36:39.174661   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:36:39.209122   13189 logs.go:123] Gathering logs for coredns [af6d8f7f18fb] ...
	I1007 05:36:39.209132   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6d8f7f18fb"
	I1007 05:36:39.224883   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:36:39.224897   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:36:39.236492   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:36:39.236500   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:36:39.247898   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:36:39.247910   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:36:39.266051   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:36:39.266064   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:36:39.279648   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:36:39.279658   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:36:39.294583   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:36:39.294592   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:36:39.319614   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:36:39.319622   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:36:39.330963   13189 logs.go:123] Gathering logs for coredns [6a2e3a0485c9] ...
	I1007 05:36:39.330975   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2e3a0485c9"
	I1007 05:36:39.342351   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:36:39.342363   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:36:41.855869   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:36:46.858648   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:36:46.858893   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:36:46.885117   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:36:46.885251   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:36:46.901335   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:36:46.901425   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:36:46.914144   13189 logs.go:282] 4 containers: [af6d8f7f18fb 6a2e3a0485c9 4b50b80d34ff 8906704c7223]
	I1007 05:36:46.914223   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:36:46.925038   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:36:46.925111   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:36:46.935369   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:36:46.935445   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:36:46.946272   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:36:46.946343   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:36:46.956260   13189 logs.go:282] 0 containers: []
	W1007 05:36:46.956271   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:36:46.956332   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:36:46.966600   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:36:46.966617   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:36:46.966625   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:36:46.971542   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:36:46.971552   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:36:47.005631   13189 logs.go:123] Gathering logs for coredns [6a2e3a0485c9] ...
	I1007 05:36:47.005642   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2e3a0485c9"
	I1007 05:36:47.021070   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:36:47.021081   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:36:47.036126   13189 logs.go:123] Gathering logs for coredns [af6d8f7f18fb] ...
	I1007 05:36:47.036137   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6d8f7f18fb"
	I1007 05:36:47.050526   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:36:47.050535   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:36:47.061942   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:36:47.061954   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:36:47.073209   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:36:47.073222   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:36:47.095568   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:36:47.095575   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:36:47.133527   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:36:47.133542   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:36:47.151596   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:36:47.151609   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:36:47.166518   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:36:47.166533   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:36:47.179390   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:36:47.179401   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:36:47.191634   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:36:47.191643   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:36:47.209045   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:36:47.209058   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:36:49.723234   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:36:54.726142   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:36:54.726599   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:36:54.759376   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:36:54.759505   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:36:54.778941   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:36:54.779043   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:36:54.794982   13189 logs.go:282] 4 containers: [af6d8f7f18fb 6a2e3a0485c9 4b50b80d34ff 8906704c7223]
	I1007 05:36:54.795067   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:36:54.806680   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:36:54.806756   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:36:54.816676   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:36:54.816746   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:36:54.828524   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:36:54.828604   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:36:54.838953   13189 logs.go:282] 0 containers: []
	W1007 05:36:54.838964   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:36:54.839020   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:36:54.852494   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:36:54.852512   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:36:54.852519   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:36:54.856853   13189 logs.go:123] Gathering logs for coredns [6a2e3a0485c9] ...
	I1007 05:36:54.856859   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2e3a0485c9"
	I1007 05:36:54.868850   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:36:54.868860   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:36:54.887687   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:36:54.887700   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:36:54.899695   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:36:54.899709   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:36:54.921419   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:36:54.921430   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:36:54.933284   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:36:54.933298   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:36:54.967459   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:36:54.967471   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:36:54.982229   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:36:54.982242   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:36:54.994656   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:36:54.994668   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:36:55.006633   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:36:55.006646   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:36:55.031332   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:36:55.031341   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:36:55.066619   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:36:55.066629   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:36:55.083572   13189 logs.go:123] Gathering logs for coredns [af6d8f7f18fb] ...
	I1007 05:36:55.083582   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6d8f7f18fb"
	I1007 05:36:55.095362   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:36:55.095374   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:36:57.608142   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:37:02.610332   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:37:02.610843   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:37:02.645648   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:37:02.645789   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:37:02.666355   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:37:02.666463   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:37:02.682681   13189 logs.go:282] 4 containers: [af6d8f7f18fb 6a2e3a0485c9 4b50b80d34ff 8906704c7223]
	I1007 05:37:02.682758   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:37:02.694755   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:37:02.694830   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:37:02.705178   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:37:02.705254   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:37:02.715305   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:37:02.715381   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:37:02.725295   13189 logs.go:282] 0 containers: []
	W1007 05:37:02.725306   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:37:02.725370   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:37:02.735336   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:37:02.735352   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:37:02.735357   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:37:02.753015   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:37:02.753028   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:37:02.764545   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:37:02.764559   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:37:02.778043   13189 logs.go:123] Gathering logs for coredns [af6d8f7f18fb] ...
	I1007 05:37:02.778053   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6d8f7f18fb"
	I1007 05:37:02.797970   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:37:02.797981   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:37:02.809468   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:37:02.809481   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:37:02.822996   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:37:02.823008   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:37:02.827675   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:37:02.827684   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:37:02.839346   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:37:02.839358   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:37:02.853712   13189 logs.go:123] Gathering logs for coredns [6a2e3a0485c9] ...
	I1007 05:37:02.853723   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2e3a0485c9"
	I1007 05:37:02.865608   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:37:02.865620   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:37:02.880325   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:37:02.880336   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:37:02.902712   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:37:02.902721   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:37:02.914291   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:37:02.914302   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:37:02.947928   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:37:02.947938   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:37:05.489847   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:37:10.490571   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:37:10.490745   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1007 05:37:10.512940   13189 logs.go:282] 1 containers: [ac3a1616acaa]
	I1007 05:37:10.513089   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1007 05:37:10.529521   13189 logs.go:282] 1 containers: [18c6fb74abde]
	I1007 05:37:10.529599   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1007 05:37:10.541957   13189 logs.go:282] 4 containers: [af6d8f7f18fb 6a2e3a0485c9 4b50b80d34ff 8906704c7223]
	I1007 05:37:10.542031   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1007 05:37:10.552798   13189 logs.go:282] 1 containers: [8fd8be14d9d6]
	I1007 05:37:10.552865   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1007 05:37:10.562815   13189 logs.go:282] 1 containers: [cdb3c7fd5ea6]
	I1007 05:37:10.562885   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1007 05:37:10.573289   13189 logs.go:282] 1 containers: [8333122c189e]
	I1007 05:37:10.573354   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1007 05:37:10.583200   13189 logs.go:282] 0 containers: []
	W1007 05:37:10.583287   13189 logs.go:284] No container was found matching "kindnet"
	I1007 05:37:10.583348   13189 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1007 05:37:10.599276   13189 logs.go:282] 1 containers: [9a20855d165a]
	I1007 05:37:10.599300   13189 logs.go:123] Gathering logs for storage-provisioner [9a20855d165a] ...
	I1007 05:37:10.599305   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9a20855d165a"
	I1007 05:37:10.610980   13189 logs.go:123] Gathering logs for coredns [6a2e3a0485c9] ...
	I1007 05:37:10.610989   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a2e3a0485c9"
	I1007 05:37:10.622854   13189 logs.go:123] Gathering logs for coredns [4b50b80d34ff] ...
	I1007 05:37:10.622863   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b50b80d34ff"
	I1007 05:37:10.634277   13189 logs.go:123] Gathering logs for coredns [8906704c7223] ...
	I1007 05:37:10.634287   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8906704c7223"
	I1007 05:37:10.647714   13189 logs.go:123] Gathering logs for kube-controller-manager [8333122c189e] ...
	I1007 05:37:10.647726   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8333122c189e"
	I1007 05:37:10.665046   13189 logs.go:123] Gathering logs for kube-apiserver [ac3a1616acaa] ...
	I1007 05:37:10.665057   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac3a1616acaa"
	I1007 05:37:10.680048   13189 logs.go:123] Gathering logs for coredns [af6d8f7f18fb] ...
	I1007 05:37:10.680058   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 af6d8f7f18fb"
	I1007 05:37:10.691340   13189 logs.go:123] Gathering logs for kube-proxy [cdb3c7fd5ea6] ...
	I1007 05:37:10.691349   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb3c7fd5ea6"
	I1007 05:37:10.703451   13189 logs.go:123] Gathering logs for kubelet ...
	I1007 05:37:10.703461   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 05:37:10.736862   13189 logs.go:123] Gathering logs for dmesg ...
	I1007 05:37:10.736869   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 05:37:10.740697   13189 logs.go:123] Gathering logs for describe nodes ...
	I1007 05:37:10.740706   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 05:37:10.774719   13189 logs.go:123] Gathering logs for kube-scheduler [8fd8be14d9d6] ...
	I1007 05:37:10.774735   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8fd8be14d9d6"
	I1007 05:37:10.791675   13189 logs.go:123] Gathering logs for etcd [18c6fb74abde] ...
	I1007 05:37:10.791685   13189 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 18c6fb74abde"
	I1007 05:37:10.805181   13189 logs.go:123] Gathering logs for Docker ...
	I1007 05:37:10.805195   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1007 05:37:10.827647   13189 logs.go:123] Gathering logs for container status ...
	I1007 05:37:10.827655   13189 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 05:37:13.341016   13189 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1007 05:37:18.343858   13189 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1007 05:37:18.355160   13189 out.go:201] 
	W1007 05:37:18.360221   13189 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1007 05:37:18.360248   13189 out.go:270] * 
	* 
	W1007 05:37:18.362835   13189 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:37:18.378035   13189 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-431000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (575.54s)
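
The failure above is minikube giving up after 6m0s of polling https://10.0.2.15:8443/healthz without ever seeing a healthy response. As a rough manual reproduction of that probe (a sketch only: the IP, port, kubectl binary path, and kubeconfig path are taken verbatim from the log above, and the profile name from the failing command; nothing here is a confirmed fix):

	# 10.0.2.15 is QEMU's user-mode NAT guest address, so probe from inside the VM:
	minikube ssh -p stopped-upgrade-431000

	# Inside the guest: hit the same endpoint minikube polls.
	# -k skips TLS verification, which is acceptable for a quick liveness check.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz

	# Or fetch the raw healthz payload via the on-VM kubectl and kubeconfig seen in the log:
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl get --raw /healthz \
	    --kubeconfig=/var/lib/minikube/kubeconfig

A healthy apiserver answers "ok"; anything else points back at the kube-apiserver and etcd containers whose logs are being gathered repeatedly above.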

                                                
                                    
TestPause/serial/Start (9.96s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-832000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-832000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.898151334s)

                                                
                                                
-- stdout --
	* [pause-832000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-832000" primary control-plane node in "pause-832000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-832000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-832000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-832000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-832000 -n pause-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-832000 -n pause-832000: exit status 7 (56.435833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.96s)
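
This failure, and every NoKubernetes and NetworkPlugins failure that follows, shares one host-side cause: the socket_vmnet client cannot reach the daemon behind /var/run/socket_vmnet ("Connection refused"), so no qemu2 VM ever gets a network. A minimal host sanity check might look like the following (a sketch; the brew service name assumes socket_vmnet was installed via Homebrew, which this report does not confirm):

	# Is anything serving the socket the client dials?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# With a Homebrew install, socket_vmnet runs as a root launchd service,
	# so restarting that service is the usual way to bring the socket back:
	sudo brew services restart socket_vmnet

Until that socket answers, every start that selects the socket_vmnet network will fail the same way as above.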

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-090000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-090000 --driver=qemu2 : exit status 80 (9.73732375s)

                                                
                                                
-- stdout --
	* [NoKubernetes-090000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-090000" primary control-plane node in "NoKubernetes-090000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-090000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-090000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-090000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-090000 -n NoKubernetes-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-090000 -n NoKubernetes-090000: exit status 7 (38.256875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.78s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-090000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-090000 --no-kubernetes --driver=qemu2 : exit status 80 (5.81832525s)

                                                
                                                
-- stdout --
	* [NoKubernetes-090000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-090000
	* Restarting existing qemu2 VM for "NoKubernetes-090000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-090000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-090000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-090000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-090000 -n NoKubernetes-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-090000 -n NoKubernetes-090000: exit status 7 (72.211166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.89s)

                                                
                                    
TestNoKubernetes/serial/Start (5.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-090000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-090000 --no-kubernetes --driver=qemu2 : exit status 80 (5.768439834s)

                                                
                                                
-- stdout --
	* [NoKubernetes-090000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-090000
	* Restarting existing qemu2 VM for "NoKubernetes-090000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-090000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-090000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-090000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-090000 -n NoKubernetes-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-090000 -n NoKubernetes-090000: exit status 7 (57.509125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.83s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.89s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-090000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-090000 --driver=qemu2 : exit status 80 (5.8135765s)

                                                
                                                
-- stdout --
	* [NoKubernetes-090000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-090000
	* Restarting existing qemu2 VM for "NoKubernetes-090000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-090000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-090000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-090000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-090000 -n NoKubernetes-090000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-090000 -n NoKubernetes-090000: exit status 7 (72.235416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-090000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.89s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.7873695s)

                                                
                                                
-- stdout --
	* [auto-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-585000" primary control-plane node in "auto-585000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-585000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 05:35:56.128659   13713 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:35:56.128845   13713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:35:56.128848   13713 out.go:358] Setting ErrFile to fd 2...
	I1007 05:35:56.128850   13713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:35:56.128985   13713 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:35:56.130186   13713 out.go:352] Setting JSON to false
	I1007 05:35:56.147924   13713 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7527,"bootTime":1728297029,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:35:56.148008   13713 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:35:56.153546   13713 out.go:177] * [auto-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:35:56.157466   13713 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:35:56.157511   13713 notify.go:220] Checking for updates...
	I1007 05:35:56.165449   13713 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:35:56.168498   13713 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:35:56.171482   13713 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:35:56.174530   13713 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:35:56.177516   13713 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:35:56.180885   13713 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:35:56.180966   13713 config.go:182] Loaded profile config "stopped-upgrade-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:35:56.181011   13713 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:35:56.185497   13713 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:35:56.192445   13713 start.go:297] selected driver: qemu2
	I1007 05:35:56.192450   13713 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:35:56.192456   13713 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:35:56.195090   13713 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:35:56.198484   13713 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:35:56.201624   13713 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:35:56.201649   13713 cni.go:84] Creating CNI manager for ""
	I1007 05:35:56.201673   13713 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:35:56.201678   13713 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:35:56.201722   13713 start.go:340] cluster config:
	{Name:auto-585000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:35:56.206550   13713 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:35:56.215446   13713 out.go:177] * Starting "auto-585000" primary control-plane node in "auto-585000" cluster
	I1007 05:35:56.218488   13713 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:35:56.218512   13713 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:35:56.218521   13713 cache.go:56] Caching tarball of preloaded images
	I1007 05:35:56.218614   13713 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:35:56.218619   13713 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:35:56.218702   13713 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/auto-585000/config.json ...
	I1007 05:35:56.218712   13713 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/auto-585000/config.json: {Name:mk35e779c8908e5603bfa91e1a96eab42a49072b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:35:56.219039   13713 start.go:360] acquireMachinesLock for auto-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:35:56.219080   13713 start.go:364] duration metric: took 36.583µs to acquireMachinesLock for "auto-585000"
	I1007 05:35:56.219092   13713 start.go:93] Provisioning new machine with config: &{Name:auto-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:35:56.219118   13713 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:35:56.226396   13713 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:35:56.241228   13713 start.go:159] libmachine.API.Create for "auto-585000" (driver="qemu2")
	I1007 05:35:56.241254   13713 client.go:168] LocalClient.Create starting
	I1007 05:35:56.241320   13713 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:35:56.241361   13713 main.go:141] libmachine: Decoding PEM data...
	I1007 05:35:56.241369   13713 main.go:141] libmachine: Parsing certificate...
	I1007 05:35:56.241414   13713 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:35:56.241442   13713 main.go:141] libmachine: Decoding PEM data...
	I1007 05:35:56.241449   13713 main.go:141] libmachine: Parsing certificate...
	I1007 05:35:56.241879   13713 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:35:56.384229   13713 main.go:141] libmachine: Creating SSH key...
	I1007 05:35:56.513817   13713 main.go:141] libmachine: Creating Disk image...
	I1007 05:35:56.513829   13713 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:35:56.514026   13713 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/disk.qcow2
	I1007 05:35:56.523974   13713 main.go:141] libmachine: STDOUT: 
	I1007 05:35:56.523999   13713 main.go:141] libmachine: STDERR: 
	I1007 05:35:56.524069   13713 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/disk.qcow2 +20000M
	I1007 05:35:56.532984   13713 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:35:56.532999   13713 main.go:141] libmachine: STDERR: 
	I1007 05:35:56.533014   13713 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/disk.qcow2
	I1007 05:35:56.533019   13713 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:35:56.533031   13713 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:35:56.533063   13713 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:af:66:9c:1e:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/disk.qcow2
	I1007 05:35:56.534974   13713 main.go:141] libmachine: STDOUT: 
	I1007 05:35:56.534997   13713 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:35:56.535019   13713 client.go:171] duration metric: took 293.76575ms to LocalClient.Create
	I1007 05:35:58.537119   13713 start.go:128] duration metric: took 2.318027084s to createHost
	I1007 05:35:58.537169   13713 start.go:83] releasing machines lock for "auto-585000", held for 2.318125416s
	W1007 05:35:58.537209   13713 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:35:58.550203   13713 out.go:177] * Deleting "auto-585000" in qemu2 ...
	W1007 05:35:58.565990   13713 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:35:58.566001   13713 start.go:729] Will try again in 5 seconds ...
	I1007 05:36:03.568054   13713 start.go:360] acquireMachinesLock for auto-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:36:03.568463   13713 start.go:364] duration metric: took 339.208µs to acquireMachinesLock for "auto-585000"
	I1007 05:36:03.568516   13713 start.go:93] Provisioning new machine with config: &{Name:auto-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:36:03.568874   13713 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:36:03.572526   13713 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:36:03.614254   13713 start.go:159] libmachine.API.Create for "auto-585000" (driver="qemu2")
	I1007 05:36:03.614305   13713 client.go:168] LocalClient.Create starting
	I1007 05:36:03.614431   13713 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:36:03.614520   13713 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:03.614534   13713 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:03.614605   13713 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:36:03.614674   13713 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:03.614690   13713 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:03.615291   13713 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:36:03.766057   13713 main.go:141] libmachine: Creating SSH key...
	I1007 05:36:03.822335   13713 main.go:141] libmachine: Creating Disk image...
	I1007 05:36:03.822341   13713 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:36:03.822543   13713 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/disk.qcow2
	I1007 05:36:03.832642   13713 main.go:141] libmachine: STDOUT: 
	I1007 05:36:03.832672   13713 main.go:141] libmachine: STDERR: 
	I1007 05:36:03.832727   13713 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/disk.qcow2 +20000M
	I1007 05:36:03.841289   13713 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:36:03.841309   13713 main.go:141] libmachine: STDERR: 
	I1007 05:36:03.841321   13713 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/disk.qcow2
	I1007 05:36:03.841327   13713 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:36:03.841347   13713 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:36:03.841374   13713 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:75:a3:a2:5d:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/auto-585000/disk.qcow2
	I1007 05:36:03.843203   13713 main.go:141] libmachine: STDOUT: 
	I1007 05:36:03.843218   13713 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:36:03.843229   13713 client.go:171] duration metric: took 228.924041ms to LocalClient.Create
	I1007 05:36:05.845418   13713 start.go:128] duration metric: took 2.276545042s to createHost
	I1007 05:36:05.845530   13713 start.go:83] releasing machines lock for "auto-585000", held for 2.27709075s
	W1007 05:36:05.845956   13713 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:36:05.855549   13713 out.go:201] 
	W1007 05:36:05.860742   13713 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:36:05.860783   13713 out.go:270] * 
	* 
	W1007 05:36:05.863400   13713 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:36:05.870669   13713 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.79s)
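
Every attempt in this group dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched. A minimal triage sketch for the agent follows, assuming the binary install layout shown in the logs; the gateway address is an illustrative default from the socket_vmnet docs, not a value taken from this run:

	# does the socket exist, and is a daemon holding it?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i vmnet

	# if nothing is listening, start the daemon by hand and retry
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon back up, the same out/minikube-darwin-arm64 start invocation should get past host creation; the remaining failures in this group repeat this one root cause.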

TestNetworkPlugins/group/kindnet/Start (9.79s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.786063375s)

-- stdout --
	* [kindnet-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-585000" primary control-plane node in "kindnet-585000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-585000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:36:08.295228   13825 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:36:08.295375   13825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:36:08.295378   13825 out.go:358] Setting ErrFile to fd 2...
	I1007 05:36:08.295380   13825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:36:08.295497   13825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:36:08.296685   13825 out.go:352] Setting JSON to false
	I1007 05:36:08.314503   13825 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7539,"bootTime":1728297029,"procs":528,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:36:08.314574   13825 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:36:08.319909   13825 out.go:177] * [kindnet-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:36:08.328075   13825 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:36:08.328153   13825 notify.go:220] Checking for updates...
	I1007 05:36:08.335037   13825 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:36:08.338089   13825 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:36:08.340947   13825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:36:08.344021   13825 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:36:08.347059   13825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:36:08.348833   13825 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:36:08.348910   13825 config.go:182] Loaded profile config "stopped-upgrade-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:36:08.348960   13825 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:36:08.353024   13825 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:36:08.359862   13825 start.go:297] selected driver: qemu2
	I1007 05:36:08.359867   13825 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:36:08.359872   13825 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:36:08.362262   13825 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:36:08.364982   13825 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:36:08.368143   13825 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:36:08.368169   13825 cni.go:84] Creating CNI manager for "kindnet"
	I1007 05:36:08.368177   13825 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 05:36:08.368212   13825 start.go:340] cluster config:
	{Name:kindnet-585000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:36:08.372825   13825 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:36:08.380994   13825 out.go:177] * Starting "kindnet-585000" primary control-plane node in "kindnet-585000" cluster
	I1007 05:36:08.385049   13825 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:36:08.385062   13825 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:36:08.385068   13825 cache.go:56] Caching tarball of preloaded images
	I1007 05:36:08.385139   13825 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:36:08.385144   13825 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:36:08.385211   13825 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/kindnet-585000/config.json ...
	I1007 05:36:08.385222   13825 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/kindnet-585000/config.json: {Name:mkf557d23bd7caff567a170e1b8324299821dba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:36:08.385477   13825 start.go:360] acquireMachinesLock for kindnet-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:36:08.385525   13825 start.go:364] duration metric: took 41.75µs to acquireMachinesLock for "kindnet-585000"
	I1007 05:36:08.385538   13825 start.go:93] Provisioning new machine with config: &{Name:kindnet-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:36:08.385572   13825 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:36:08.392959   13825 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:36:08.409636   13825 start.go:159] libmachine.API.Create for "kindnet-585000" (driver="qemu2")
	I1007 05:36:08.409667   13825 client.go:168] LocalClient.Create starting
	I1007 05:36:08.409748   13825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:36:08.409786   13825 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:08.409799   13825 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:08.409843   13825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:36:08.409872   13825 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:08.409881   13825 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:08.410305   13825 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:36:08.553823   13825 main.go:141] libmachine: Creating SSH key...
	I1007 05:36:08.687919   13825 main.go:141] libmachine: Creating Disk image...
	I1007 05:36:08.687928   13825 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:36:08.688140   13825 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/disk.qcow2
	I1007 05:36:08.697964   13825 main.go:141] libmachine: STDOUT: 
	I1007 05:36:08.698060   13825 main.go:141] libmachine: STDERR: 
	I1007 05:36:08.698125   13825 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/disk.qcow2 +20000M
	I1007 05:36:08.706629   13825 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:36:08.706715   13825 main.go:141] libmachine: STDERR: 
	I1007 05:36:08.706733   13825 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/disk.qcow2
	I1007 05:36:08.706738   13825 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:36:08.706757   13825 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:36:08.706792   13825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:12:ae:1e:8b:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/disk.qcow2
	I1007 05:36:08.708599   13825 main.go:141] libmachine: STDOUT: 
	I1007 05:36:08.708652   13825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:36:08.708674   13825 client.go:171] duration metric: took 299.004291ms to LocalClient.Create
	I1007 05:36:10.710959   13825 start.go:128] duration metric: took 2.325387625s to createHost
	I1007 05:36:10.711058   13825 start.go:83] releasing machines lock for "kindnet-585000", held for 2.325566709s
	W1007 05:36:10.711104   13825 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:36:10.722091   13825 out.go:177] * Deleting "kindnet-585000" in qemu2 ...
	W1007 05:36:10.750076   13825 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:36:10.750115   13825 start.go:729] Will try again in 5 seconds ...
	I1007 05:36:15.751499   13825 start.go:360] acquireMachinesLock for kindnet-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:36:15.751668   13825 start.go:364] duration metric: took 147.25µs to acquireMachinesLock for "kindnet-585000"
	I1007 05:36:15.751683   13825 start.go:93] Provisioning new machine with config: &{Name:kindnet-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:36:15.751726   13825 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:36:15.760981   13825 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:36:15.775755   13825 start.go:159] libmachine.API.Create for "kindnet-585000" (driver="qemu2")
	I1007 05:36:15.775786   13825 client.go:168] LocalClient.Create starting
	I1007 05:36:15.775855   13825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:36:15.775895   13825 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:15.775904   13825 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:15.775942   13825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:36:15.775974   13825 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:15.775980   13825 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:15.776357   13825 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:36:15.919831   13825 main.go:141] libmachine: Creating SSH key...
	I1007 05:36:15.982602   13825 main.go:141] libmachine: Creating Disk image...
	I1007 05:36:15.982610   13825 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:36:15.982815   13825 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/disk.qcow2
	I1007 05:36:15.993072   13825 main.go:141] libmachine: STDOUT: 
	I1007 05:36:15.993102   13825 main.go:141] libmachine: STDERR: 
	I1007 05:36:15.993174   13825 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/disk.qcow2 +20000M
	I1007 05:36:16.002123   13825 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:36:16.002137   13825 main.go:141] libmachine: STDERR: 
	I1007 05:36:16.002153   13825 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/disk.qcow2
	I1007 05:36:16.002159   13825 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:36:16.002170   13825 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:36:16.002209   13825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:bc:a2:47:df:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kindnet-585000/disk.qcow2
	I1007 05:36:16.004106   13825 main.go:141] libmachine: STDOUT: 
	I1007 05:36:16.004118   13825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:36:16.004140   13825 client.go:171] duration metric: took 228.352041ms to LocalClient.Create
	I1007 05:36:18.006284   13825 start.go:128] duration metric: took 2.254572084s to createHost
	I1007 05:36:18.006385   13825 start.go:83] releasing machines lock for "kindnet-585000", held for 2.254748958s
	W1007 05:36:18.006768   13825 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:36:18.018505   13825 out.go:201] 
	W1007 05:36:18.022560   13825 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:36:18.022588   13825 out.go:270] * 
	* 
	W1007 05:36:18.024960   13825 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:36:18.035386   13825 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.79s)
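
The "executing:" lines above also show how the driver wires the VM to the network: socket_vmnet_client connects to the unix socket, hands the connected descriptor to QEMU as fd 3, and QEMU attaches it with -netdev socket,id=net0,fd=3. Stripped to its essentials (paths, sizes, and the disk image name below are placeholders, not the exact test values):

	# socket_vmnet_client connects to the socket first, then execs QEMU
	# with the connection inherited as file descriptor 3
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt -accel hvf -m 3072 -smp 2 \
	    -device virtio-net-pci,netdev=net0 \
	    -netdev socket,id=net0,fd=3 \
	    disk.qcow2

Because the connect happens before QEMU is exec'd, a missing daemon surfaces as an immediate "Connection refused" on stderr, which is why each create attempt here fails within a few hundred milliseconds instead of timing out during boot.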

TestNetworkPlugins/group/calico/Start (9.96s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.960955583s)

-- stdout --
	* [calico-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-585000" primary control-plane node in "calico-585000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-585000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:36:20.543584   13938 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:36:20.543762   13938 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:36:20.543765   13938 out.go:358] Setting ErrFile to fd 2...
	I1007 05:36:20.543771   13938 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:36:20.543924   13938 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:36:20.545141   13938 out.go:352] Setting JSON to false
	I1007 05:36:20.563824   13938 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7551,"bootTime":1728297029,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:36:20.563921   13938 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:36:20.569407   13938 out.go:177] * [calico-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:36:20.577423   13938 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:36:20.577507   13938 notify.go:220] Checking for updates...
	I1007 05:36:20.584373   13938 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:36:20.591348   13938 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:36:20.594371   13938 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:36:20.597354   13938 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:36:20.601411   13938 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:36:20.604656   13938 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:36:20.604734   13938 config.go:182] Loaded profile config "stopped-upgrade-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:36:20.604789   13938 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:36:20.608336   13938 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:36:20.616263   13938 start.go:297] selected driver: qemu2
	I1007 05:36:20.616269   13938 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:36:20.616274   13938 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:36:20.618617   13938 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:36:20.623361   13938 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:36:20.627395   13938 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:36:20.627423   13938 cni.go:84] Creating CNI manager for "calico"
	I1007 05:36:20.627435   13938 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1007 05:36:20.627484   13938 start.go:340] cluster config:
	{Name:calico-585000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:36:20.632166   13938 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:36:20.635426   13938 out.go:177] * Starting "calico-585000" primary control-plane node in "calico-585000" cluster
	I1007 05:36:20.643348   13938 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:36:20.643364   13938 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:36:20.643372   13938 cache.go:56] Caching tarball of preloaded images
	I1007 05:36:20.643451   13938 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:36:20.643464   13938 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:36:20.643525   13938 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/calico-585000/config.json ...
	I1007 05:36:20.643541   13938 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/calico-585000/config.json: {Name:mke4cb6f1a42b8f57dbfbe9c8f8661be745e3a00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:36:20.643769   13938 start.go:360] acquireMachinesLock for calico-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:36:20.643813   13938 start.go:364] duration metric: took 39.625µs to acquireMachinesLock for "calico-585000"
	I1007 05:36:20.643826   13938 start.go:93] Provisioning new machine with config: &{Name:calico-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:36:20.643848   13938 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:36:20.647391   13938 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:36:20.661854   13938 start.go:159] libmachine.API.Create for "calico-585000" (driver="qemu2")
	I1007 05:36:20.661879   13938 client.go:168] LocalClient.Create starting
	I1007 05:36:20.661946   13938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:36:20.661982   13938 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:20.661993   13938 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:20.662037   13938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:36:20.662065   13938 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:20.662077   13938 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:20.662474   13938 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:36:20.803310   13938 main.go:141] libmachine: Creating SSH key...
	I1007 05:36:20.904259   13938 main.go:141] libmachine: Creating Disk image...
	I1007 05:36:20.904267   13938 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:36:20.904471   13938 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/disk.qcow2
	I1007 05:36:20.914153   13938 main.go:141] libmachine: STDOUT: 
	I1007 05:36:20.914174   13938 main.go:141] libmachine: STDERR: 
	I1007 05:36:20.914237   13938 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/disk.qcow2 +20000M
	I1007 05:36:20.922723   13938 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:36:20.922738   13938 main.go:141] libmachine: STDERR: 
	I1007 05:36:20.922752   13938 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/disk.qcow2
	I1007 05:36:20.922758   13938 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:36:20.922770   13938 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:36:20.922810   13938 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:a5:1b:11:48:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/disk.qcow2
	I1007 05:36:20.924685   13938 main.go:141] libmachine: STDOUT: 
	I1007 05:36:20.924700   13938 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:36:20.924721   13938 client.go:171] duration metric: took 262.839125ms to LocalClient.Create
	I1007 05:36:22.926979   13938 start.go:128] duration metric: took 2.283108458s to createHost
	I1007 05:36:22.927082   13938 start.go:83] releasing machines lock for "calico-585000", held for 2.283300875s
	W1007 05:36:22.927136   13938 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:36:22.939211   13938 out.go:177] * Deleting "calico-585000" in qemu2 ...
	W1007 05:36:22.962139   13938 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:36:22.962167   13938 start.go:729] Will try again in 5 seconds ...
	I1007 05:36:27.964326   13938 start.go:360] acquireMachinesLock for calico-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:36:27.964919   13938 start.go:364] duration metric: took 477.459µs to acquireMachinesLock for "calico-585000"
	I1007 05:36:27.965021   13938 start.go:93] Provisioning new machine with config: &{Name:calico-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:36:27.965312   13938 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:36:27.972031   13938 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:36:28.020484   13938 start.go:159] libmachine.API.Create for "calico-585000" (driver="qemu2")
	I1007 05:36:28.020550   13938 client.go:168] LocalClient.Create starting
	I1007 05:36:28.020689   13938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:36:28.020776   13938 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:28.020794   13938 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:28.020866   13938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:36:28.020923   13938 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:28.020934   13938 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:28.021519   13938 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:36:28.175853   13938 main.go:141] libmachine: Creating SSH key...
	I1007 05:36:28.407670   13938 main.go:141] libmachine: Creating Disk image...
	I1007 05:36:28.407680   13938 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:36:28.407913   13938 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/disk.qcow2
	I1007 05:36:28.418293   13938 main.go:141] libmachine: STDOUT: 
	I1007 05:36:28.418318   13938 main.go:141] libmachine: STDERR: 
	I1007 05:36:28.418388   13938 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/disk.qcow2 +20000M
	I1007 05:36:28.427131   13938 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:36:28.427147   13938 main.go:141] libmachine: STDERR: 
	I1007 05:36:28.427166   13938 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/disk.qcow2
	I1007 05:36:28.427172   13938 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:36:28.427182   13938 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:36:28.427219   13938 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:c8:8a:56:2e:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/calico-585000/disk.qcow2
	I1007 05:36:28.429031   13938 main.go:141] libmachine: STDOUT: 
	I1007 05:36:28.429047   13938 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:36:28.429062   13938 client.go:171] duration metric: took 408.514209ms to LocalClient.Create
	I1007 05:36:30.431311   13938 start.go:128] duration metric: took 2.465997292s to createHost
	I1007 05:36:30.431408   13938 start.go:83] releasing machines lock for "calico-585000", held for 2.466485875s
	W1007 05:36:30.431791   13938 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:36:30.442433   13938 out.go:201] 
	W1007 05:36:30.446447   13938 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:36:30.446499   13938 out.go:270] * 
	* 
	W1007 05:36:30.448404   13938 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:36:30.461408   13938 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.96s)
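Note: every start failure in this group has the same proximate cause: the socket_vmnet client cannot reach the daemon's unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and host creation aborts with exit status 1. A minimal triage sketch for the build agent, assuming the install layout shown in the log above (binaries under /opt/socket_vmnet, socket at /var/run/socket_vmnet); the restart command is install-specific and is shown for a Homebrew-managed daemon only:

	# Does the unix socket exist, and is a daemon process actually running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# Hypothetical fix for a Homebrew-managed install; adjust to however the
	# daemon is launched on this agent (launchd plist, manual start, etc.).
	sudo brew services restart socket_vmnet

If the socket file exists but connections are still refused, the daemon has likely died and left a stale socket behind.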

TestNetworkPlugins/group/custom-flannel/Start (9.84s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.843310583s)

-- stdout --
	* [custom-flannel-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-585000" primary control-plane node in "custom-flannel-585000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-585000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:36:33.070502   14055 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:36:33.070659   14055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:36:33.070663   14055 out.go:358] Setting ErrFile to fd 2...
	I1007 05:36:33.070665   14055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:36:33.070795   14055 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:36:33.071962   14055 out.go:352] Setting JSON to false
	I1007 05:36:33.089957   14055 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7564,"bootTime":1728297029,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:36:33.090029   14055 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:36:33.095352   14055 out.go:177] * [custom-flannel-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:36:33.103170   14055 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:36:33.103275   14055 notify.go:220] Checking for updates...
	I1007 05:36:33.110202   14055 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:36:33.113198   14055 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:36:33.116249   14055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:36:33.119149   14055 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:36:33.122199   14055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:36:33.125577   14055 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:36:33.125648   14055 config.go:182] Loaded profile config "stopped-upgrade-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:36:33.125693   14055 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:36:33.130192   14055 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:36:33.137186   14055 start.go:297] selected driver: qemu2
	I1007 05:36:33.137191   14055 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:36:33.137197   14055 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:36:33.139534   14055 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:36:33.143238   14055 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:36:33.146336   14055 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:36:33.146358   14055 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1007 05:36:33.146374   14055 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1007 05:36:33.146408   14055 start.go:340] cluster config:
	{Name:custom-flannel-585000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:36:33.150836   14055 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:36:33.159183   14055 out.go:177] * Starting "custom-flannel-585000" primary control-plane node in "custom-flannel-585000" cluster
	I1007 05:36:33.163128   14055 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:36:33.163145   14055 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:36:33.163153   14055 cache.go:56] Caching tarball of preloaded images
	I1007 05:36:33.163235   14055 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:36:33.163242   14055 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:36:33.163318   14055 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/custom-flannel-585000/config.json ...
	I1007 05:36:33.163329   14055 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/custom-flannel-585000/config.json: {Name:mkb2c48e5680b9200c139ed10cd8389b6157fd21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:36:33.163585   14055 start.go:360] acquireMachinesLock for custom-flannel-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:36:33.163633   14055 start.go:364] duration metric: took 41.625µs to acquireMachinesLock for "custom-flannel-585000"
	I1007 05:36:33.163647   14055 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:36:33.163674   14055 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:36:33.168327   14055 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:36:33.185157   14055 start.go:159] libmachine.API.Create for "custom-flannel-585000" (driver="qemu2")
	I1007 05:36:33.185186   14055 client.go:168] LocalClient.Create starting
	I1007 05:36:33.185293   14055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:36:33.185332   14055 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:33.185343   14055 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:33.185384   14055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:36:33.185413   14055 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:33.185419   14055 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:33.185807   14055 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:36:33.327019   14055 main.go:141] libmachine: Creating SSH key...
	I1007 05:36:33.430902   14055 main.go:141] libmachine: Creating Disk image...
	I1007 05:36:33.430910   14055 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:36:33.431098   14055 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/disk.qcow2
	I1007 05:36:33.440871   14055 main.go:141] libmachine: STDOUT: 
	I1007 05:36:33.440891   14055 main.go:141] libmachine: STDERR: 
	I1007 05:36:33.440950   14055 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/disk.qcow2 +20000M
	I1007 05:36:33.449424   14055 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:36:33.449448   14055 main.go:141] libmachine: STDERR: 
	I1007 05:36:33.449461   14055 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/disk.qcow2
	I1007 05:36:33.449467   14055 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:36:33.449478   14055 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:36:33.449505   14055 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:92:a9:05:5d:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/disk.qcow2
	I1007 05:36:33.451329   14055 main.go:141] libmachine: STDOUT: 
	I1007 05:36:33.451340   14055 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:36:33.451360   14055 client.go:171] duration metric: took 266.173958ms to LocalClient.Create
	I1007 05:36:35.453603   14055 start.go:128] duration metric: took 2.289927625s to createHost
	I1007 05:36:35.453695   14055 start.go:83] releasing machines lock for "custom-flannel-585000", held for 2.290094542s
	W1007 05:36:35.453743   14055 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:36:35.468016   14055 out.go:177] * Deleting "custom-flannel-585000" in qemu2 ...
	W1007 05:36:35.491873   14055 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:36:35.491961   14055 start.go:729] Will try again in 5 seconds ...
	I1007 05:36:40.494079   14055 start.go:360] acquireMachinesLock for custom-flannel-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:36:40.494620   14055 start.go:364] duration metric: took 438.208µs to acquireMachinesLock for "custom-flannel-585000"
	I1007 05:36:40.494736   14055 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:36:40.494951   14055 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:36:40.504678   14055 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:36:40.551133   14055 start.go:159] libmachine.API.Create for "custom-flannel-585000" (driver="qemu2")
	I1007 05:36:40.551196   14055 client.go:168] LocalClient.Create starting
	I1007 05:36:40.551359   14055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:36:40.551441   14055 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:40.551457   14055 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:40.551530   14055 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:36:40.551589   14055 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:40.551604   14055 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:40.552283   14055 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:36:40.729074   14055 main.go:141] libmachine: Creating SSH key...
	I1007 05:36:40.815623   14055 main.go:141] libmachine: Creating Disk image...
	I1007 05:36:40.815632   14055 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:36:40.815842   14055 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/disk.qcow2
	I1007 05:36:40.826614   14055 main.go:141] libmachine: STDOUT: 
	I1007 05:36:40.826653   14055 main.go:141] libmachine: STDERR: 
	I1007 05:36:40.826727   14055 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/disk.qcow2 +20000M
	I1007 05:36:40.835842   14055 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:36:40.835861   14055 main.go:141] libmachine: STDERR: 
	I1007 05:36:40.835875   14055 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/disk.qcow2
	I1007 05:36:40.835885   14055 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:36:40.835893   14055 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:36:40.835924   14055 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:90:be:df:1e:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/custom-flannel-585000/disk.qcow2
	I1007 05:36:40.837831   14055 main.go:141] libmachine: STDOUT: 
	I1007 05:36:40.837846   14055 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:36:40.837862   14055 client.go:171] duration metric: took 286.665417ms to LocalClient.Create
	I1007 05:36:42.839951   14055 start.go:128] duration metric: took 2.344992167s to createHost
	I1007 05:36:42.839977   14055 start.go:83] releasing machines lock for "custom-flannel-585000", held for 2.345383708s
	W1007 05:36:42.840134   14055 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:36:42.854016   14055 out.go:201] 
	W1007 05:36:42.859036   14055 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:36:42.859045   14055 out.go:270] * 
	* 
	W1007 05:36:42.860035   14055 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:36:42.871903   14055 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.84s)

TestNetworkPlugins/group/false/Start (9.87s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.865062209s)

-- stdout --
	* [false-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-585000" primary control-plane node in "false-585000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-585000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:36:45.399935   14172 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:36:45.400097   14172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:36:45.400100   14172 out.go:358] Setting ErrFile to fd 2...
	I1007 05:36:45.400102   14172 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:36:45.400236   14172 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:36:45.401456   14172 out.go:352] Setting JSON to false
	I1007 05:36:45.420014   14172 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7576,"bootTime":1728297029,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:36:45.420082   14172 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:36:45.426524   14172 out.go:177] * [false-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:36:45.434589   14172 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:36:45.434670   14172 notify.go:220] Checking for updates...
	I1007 05:36:45.441434   14172 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:36:45.444474   14172 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:36:45.447516   14172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:36:45.450396   14172 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:36:45.453492   14172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:36:45.456862   14172 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:36:45.456945   14172 config.go:182] Loaded profile config "stopped-upgrade-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:36:45.457009   14172 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:36:45.461440   14172 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:36:45.468465   14172 start.go:297] selected driver: qemu2
	I1007 05:36:45.468470   14172 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:36:45.468475   14172 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:36:45.470905   14172 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:36:45.474453   14172 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:36:45.477531   14172 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:36:45.477546   14172 cni.go:84] Creating CNI manager for "false"
	I1007 05:36:45.477570   14172 start.go:340] cluster config:
	{Name:false-585000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:36:45.481904   14172 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:36:45.490450   14172 out.go:177] * Starting "false-585000" primary control-plane node in "false-585000" cluster
	I1007 05:36:45.494505   14172 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:36:45.494533   14172 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:36:45.494538   14172 cache.go:56] Caching tarball of preloaded images
	I1007 05:36:45.494636   14172 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:36:45.494641   14172 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:36:45.494715   14172 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/false-585000/config.json ...
	I1007 05:36:45.494728   14172 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/false-585000/config.json: {Name:mkc81b3b79884671331f0850bc0008804bbca7f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:36:45.495057   14172 start.go:360] acquireMachinesLock for false-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:36:45.495100   14172 start.go:364] duration metric: took 37.583µs to acquireMachinesLock for "false-585000"
	I1007 05:36:45.495112   14172 start.go:93] Provisioning new machine with config: &{Name:false-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:36:45.495145   14172 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:36:45.499446   14172 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:36:45.513954   14172 start.go:159] libmachine.API.Create for "false-585000" (driver="qemu2")
	I1007 05:36:45.513980   14172 client.go:168] LocalClient.Create starting
	I1007 05:36:45.514044   14172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:36:45.514081   14172 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:45.514091   14172 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:45.514135   14172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:36:45.514165   14172 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:45.514177   14172 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:45.514613   14172 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:36:45.730412   14172 main.go:141] libmachine: Creating SSH key...
	I1007 05:36:45.791026   14172 main.go:141] libmachine: Creating Disk image...
	I1007 05:36:45.791032   14172 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:36:45.791219   14172 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/disk.qcow2
	I1007 05:36:45.801201   14172 main.go:141] libmachine: STDOUT: 
	I1007 05:36:45.801229   14172 main.go:141] libmachine: STDERR: 
	I1007 05:36:45.801298   14172 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/disk.qcow2 +20000M
	I1007 05:36:45.809804   14172 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:36:45.809819   14172 main.go:141] libmachine: STDERR: 
	I1007 05:36:45.809841   14172 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/disk.qcow2
	I1007 05:36:45.809846   14172 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:36:45.809858   14172 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:36:45.809896   14172 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:24:90:02:ef:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/disk.qcow2
	I1007 05:36:45.811807   14172 main.go:141] libmachine: STDOUT: 
	I1007 05:36:45.811829   14172 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:36:45.811851   14172 client.go:171] duration metric: took 297.870667ms to LocalClient.Create
	I1007 05:36:47.814077   14172 start.go:128] duration metric: took 2.318911s to createHost
	I1007 05:36:47.814190   14172 start.go:83] releasing machines lock for "false-585000", held for 2.319124916s
	W1007 05:36:47.814245   14172 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:36:47.827165   14172 out.go:177] * Deleting "false-585000" in qemu2 ...
	W1007 05:36:47.848316   14172 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:36:47.848338   14172 start.go:729] Will try again in 5 seconds ...
	I1007 05:36:52.850646   14172 start.go:360] acquireMachinesLock for false-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:36:52.851272   14172 start.go:364] duration metric: took 495.916µs to acquireMachinesLock for "false-585000"
	I1007 05:36:52.851402   14172 start.go:93] Provisioning new machine with config: &{Name:false-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:36:52.851649   14172 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:36:52.862292   14172 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:36:52.906194   14172 start.go:159] libmachine.API.Create for "false-585000" (driver="qemu2")
	I1007 05:36:52.906245   14172 client.go:168] LocalClient.Create starting
	I1007 05:36:52.906396   14172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:36:52.906490   14172 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:52.906513   14172 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:52.906589   14172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:36:52.906648   14172 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:52.906661   14172 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:52.907345   14172 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:36:53.059764   14172 main.go:141] libmachine: Creating SSH key...
	I1007 05:36:53.170646   14172 main.go:141] libmachine: Creating Disk image...
	I1007 05:36:53.170653   14172 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:36:53.170849   14172 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/disk.qcow2
	I1007 05:36:53.181357   14172 main.go:141] libmachine: STDOUT: 
	I1007 05:36:53.181381   14172 main.go:141] libmachine: STDERR: 
	I1007 05:36:53.181457   14172 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/disk.qcow2 +20000M
	I1007 05:36:53.190339   14172 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:36:53.190357   14172 main.go:141] libmachine: STDERR: 
	I1007 05:36:53.190367   14172 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/disk.qcow2
	I1007 05:36:53.190371   14172 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:36:53.190380   14172 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:36:53.190407   14172 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:3f:f9:88:1c:50 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/false-585000/disk.qcow2
	I1007 05:36:53.192356   14172 main.go:141] libmachine: STDOUT: 
	I1007 05:36:53.192372   14172 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:36:53.192386   14172 client.go:171] duration metric: took 286.139125ms to LocalClient.Create
	I1007 05:36:55.194490   14172 start.go:128] duration metric: took 2.342865708s to createHost
	I1007 05:36:55.194530   14172 start.go:83] releasing machines lock for "false-585000", held for 2.343285833s
	W1007 05:36:55.194641   14172 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:36:55.206918   14172 out.go:201] 
	W1007 05:36:55.210852   14172 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:36:55.210861   14172 out.go:270] * 
	* 
	W1007 05:36:55.211524   14172 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:36:55.222864   14172 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.87s)
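
Every start failure in this group reduces to the same precondition: nothing is listening on the socket_vmnet control socket, so /opt/socket_vmnet/bin/socket_vmnet_client is refused before qemu-system-aarch64 ever boots. A minimal Go diagnostic sketch of that precondition (a standalone probe, not part of net_test.go; the socket path is the SocketVMnetPath value from the config dumps above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client connects to on behalf
		// of QEMU. On this agent the dial is refused, matching the
		// `Failed to connect to "/var/run/socket_vmnet": Connection refused`
		// line in every STDERR capture above.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}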

TestNetworkPlugins/group/enable-default-cni/Start (9.76s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.756095875s)

-- stdout --
	* [enable-default-cni-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-585000" primary control-plane node in "enable-default-cni-585000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-585000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:36:57.566607   14282 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:36:57.566753   14282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:36:57.566759   14282 out.go:358] Setting ErrFile to fd 2...
	I1007 05:36:57.566762   14282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:36:57.566911   14282 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:36:57.568110   14282 out.go:352] Setting JSON to false
	I1007 05:36:57.585992   14282 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7588,"bootTime":1728297029,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:36:57.586060   14282 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:36:57.591178   14282 out.go:177] * [enable-default-cni-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:36:57.598020   14282 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:36:57.598054   14282 notify.go:220] Checking for updates...
	I1007 05:36:57.604995   14282 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:36:57.608044   14282 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:36:57.611014   14282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:36:57.612250   14282 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:36:57.614997   14282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:36:57.618407   14282 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:36:57.618474   14282 config.go:182] Loaded profile config "stopped-upgrade-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:36:57.618517   14282 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:36:57.622804   14282 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:36:57.630039   14282 start.go:297] selected driver: qemu2
	I1007 05:36:57.630044   14282 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:36:57.630050   14282 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:36:57.632547   14282 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:36:57.635028   14282 out.go:177] * Automatically selected the socket_vmnet network
	E1007 05:36:57.638084   14282 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1007 05:36:57.638099   14282 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:36:57.638125   14282 cni.go:84] Creating CNI manager for "bridge"
	I1007 05:36:57.638133   14282 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:36:57.638166   14282 start.go:340] cluster config:
	{Name:enable-default-cni-585000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:36:57.642650   14282 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:36:57.650990   14282 out.go:177] * Starting "enable-default-cni-585000" primary control-plane node in "enable-default-cni-585000" cluster
	I1007 05:36:57.655014   14282 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:36:57.655027   14282 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:36:57.655034   14282 cache.go:56] Caching tarball of preloaded images
	I1007 05:36:57.655104   14282 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:36:57.655110   14282 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:36:57.655170   14282 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/enable-default-cni-585000/config.json ...
	I1007 05:36:57.655181   14282 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/enable-default-cni-585000/config.json: {Name:mk42e30270683b38ccfd470e4229ac2f2f8b8432 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:36:57.655513   14282 start.go:360] acquireMachinesLock for enable-default-cni-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:36:57.655564   14282 start.go:364] duration metric: took 42.083µs to acquireMachinesLock for "enable-default-cni-585000"
	I1007 05:36:57.655577   14282 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:36:57.655605   14282 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:36:57.658954   14282 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:36:57.675376   14282 start.go:159] libmachine.API.Create for "enable-default-cni-585000" (driver="qemu2")
	I1007 05:36:57.675401   14282 client.go:168] LocalClient.Create starting
	I1007 05:36:57.675474   14282 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:36:57.675518   14282 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:57.675531   14282 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:57.675581   14282 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:36:57.675612   14282 main.go:141] libmachine: Decoding PEM data...
	I1007 05:36:57.675620   14282 main.go:141] libmachine: Parsing certificate...
	I1007 05:36:57.676021   14282 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:36:57.816791   14282 main.go:141] libmachine: Creating SSH key...
	I1007 05:36:57.898541   14282 main.go:141] libmachine: Creating Disk image...
	I1007 05:36:57.898549   14282 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:36:57.898749   14282 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/disk.qcow2
	I1007 05:36:57.908647   14282 main.go:141] libmachine: STDOUT: 
	I1007 05:36:57.908661   14282 main.go:141] libmachine: STDERR: 
	I1007 05:36:57.908716   14282 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/disk.qcow2 +20000M
	I1007 05:36:57.917423   14282 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:36:57.917456   14282 main.go:141] libmachine: STDERR: 
	I1007 05:36:57.917472   14282 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/disk.qcow2
	I1007 05:36:57.917484   14282 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:36:57.917497   14282 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:36:57.917533   14282 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:39:dc:8d:ab:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/disk.qcow2
	I1007 05:36:57.919413   14282 main.go:141] libmachine: STDOUT: 
	I1007 05:36:57.919484   14282 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:36:57.919512   14282 client.go:171] duration metric: took 244.109625ms to LocalClient.Create
	I1007 05:36:59.921748   14282 start.go:128] duration metric: took 2.266151458s to createHost
	I1007 05:36:59.921855   14282 start.go:83] releasing machines lock for "enable-default-cni-585000", held for 2.266321959s
	W1007 05:36:59.921932   14282 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:36:59.936215   14282 out.go:177] * Deleting "enable-default-cni-585000" in qemu2 ...
	W1007 05:36:59.960897   14282 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:36:59.960933   14282 start.go:729] Will try again in 5 seconds ...
	I1007 05:37:04.963025   14282 start.go:360] acquireMachinesLock for enable-default-cni-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:37:04.963659   14282 start.go:364] duration metric: took 528.083µs to acquireMachinesLock for "enable-default-cni-585000"
	I1007 05:37:04.963734   14282 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:37:04.964015   14282 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:37:04.972721   14282 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:37:05.016089   14282 start.go:159] libmachine.API.Create for "enable-default-cni-585000" (driver="qemu2")
	I1007 05:37:05.016139   14282 client.go:168] LocalClient.Create starting
	I1007 05:37:05.016293   14282 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:37:05.016381   14282 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:05.016403   14282 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:05.016473   14282 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:37:05.016534   14282 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:05.016549   14282 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:05.017159   14282 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:37:05.172500   14282 main.go:141] libmachine: Creating SSH key...
	I1007 05:37:05.233107   14282 main.go:141] libmachine: Creating Disk image...
	I1007 05:37:05.233117   14282 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:37:05.233316   14282 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/disk.qcow2
	I1007 05:37:05.243467   14282 main.go:141] libmachine: STDOUT: 
	I1007 05:37:05.243494   14282 main.go:141] libmachine: STDERR: 
	I1007 05:37:05.243564   14282 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/disk.qcow2 +20000M
	I1007 05:37:05.252393   14282 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:37:05.252407   14282 main.go:141] libmachine: STDERR: 
	I1007 05:37:05.252423   14282 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/disk.qcow2
	I1007 05:37:05.252431   14282 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:37:05.252439   14282 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:37:05.252467   14282 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:6a:51:62:bf:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/enable-default-cni-585000/disk.qcow2
	I1007 05:37:05.254321   14282 main.go:141] libmachine: STDOUT: 
	I1007 05:37:05.254335   14282 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:37:05.254348   14282 client.go:171] duration metric: took 238.209042ms to LocalClient.Create
	I1007 05:37:07.256451   14282 start.go:128] duration metric: took 2.292455292s to createHost
	I1007 05:37:07.256507   14282 start.go:83] releasing machines lock for "enable-default-cni-585000", held for 2.29286725s
	W1007 05:37:07.256733   14282 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:37:07.263197   14282 out.go:201] 
	W1007 05:37:07.267237   14282 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:37:07.267251   14282 out.go:270] * 
	* 
	W1007 05:37:07.268532   14282 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:37:07.279052   14282 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.76s)
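
Note the E-level line in the trace above: start_flags.go:464 rewrites the deprecated --enable-default-cni flag to --cni=bridge, which is why the resulting cluster config records EnableDefaultCNI:false CNI:bridge. A hedged Go sketch of that normalization (function and parameter names here are illustrative, not minikube's actual code):

	package main

	import "fmt"

	// normalizeCNI folds the deprecated boolean flag into the newer --cni
	// value, as the log line "Found deprecated --enable-default-cni flag,
	// setting --cni=bridge" reports.
	func normalizeCNI(enableDefaultCNI bool, cni string) string {
		if enableDefaultCNI && cni == "" {
			return "bridge"
		}
		return cni
	}

	func main() {
		fmt.Println(normalizeCNI(true, ""))         // "bridge"
		fmt.Println(normalizeCNI(false, "flannel")) // "flannel", unchanged
	}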

TestNetworkPlugins/group/flannel/Start (9.89s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.884029833s)

-- stdout --
	* [flannel-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-585000" primary control-plane node in "flannel-585000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-585000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:37:09.661646   14391 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:37:09.661822   14391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:37:09.661825   14391 out.go:358] Setting ErrFile to fd 2...
	I1007 05:37:09.661828   14391 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:37:09.661966   14391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:37:09.663216   14391 out.go:352] Setting JSON to false
	I1007 05:37:09.681882   14391 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7600,"bootTime":1728297029,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:37:09.681966   14391 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:37:09.687106   14391 out.go:177] * [flannel-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:37:09.695129   14391 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:37:09.695224   14391 notify.go:220] Checking for updates...
	I1007 05:37:09.702049   14391 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:37:09.705150   14391 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:37:09.708150   14391 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:37:09.711112   14391 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:37:09.714080   14391 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:37:09.717497   14391 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:37:09.717573   14391 config.go:182] Loaded profile config "stopped-upgrade-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:37:09.717622   14391 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:37:09.722037   14391 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:37:09.729096   14391 start.go:297] selected driver: qemu2
	I1007 05:37:09.729103   14391 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:37:09.729109   14391 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:37:09.731546   14391 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:37:09.734987   14391 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:37:09.738187   14391 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:37:09.738203   14391 cni.go:84] Creating CNI manager for "flannel"
	I1007 05:37:09.738207   14391 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1007 05:37:09.738234   14391 start.go:340] cluster config:
	{Name:flannel-585000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:37:09.742733   14391 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:37:09.747164   14391 out.go:177] * Starting "flannel-585000" primary control-plane node in "flannel-585000" cluster
	I1007 05:37:09.751087   14391 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:37:09.751104   14391 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:37:09.751113   14391 cache.go:56] Caching tarball of preloaded images
	I1007 05:37:09.751186   14391 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:37:09.751191   14391 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:37:09.751256   14391 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/flannel-585000/config.json ...
	I1007 05:37:09.751267   14391 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/flannel-585000/config.json: {Name:mkb5b2f92d55864450b4a2f7d47b02ad9bcc70da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:37:09.751511   14391 start.go:360] acquireMachinesLock for flannel-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:37:09.751558   14391 start.go:364] duration metric: took 41.708µs to acquireMachinesLock for "flannel-585000"
	I1007 05:37:09.751574   14391 start.go:93] Provisioning new machine with config: &{Name:flannel-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:37:09.751601   14391 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:37:09.756112   14391 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:37:09.771313   14391 start.go:159] libmachine.API.Create for "flannel-585000" (driver="qemu2")
	I1007 05:37:09.771345   14391 client.go:168] LocalClient.Create starting
	I1007 05:37:09.771415   14391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:37:09.771451   14391 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:09.771465   14391 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:09.771507   14391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:37:09.771537   14391 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:09.771547   14391 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:09.771932   14391 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:37:09.913232   14391 main.go:141] libmachine: Creating SSH key...
	I1007 05:37:10.166101   14391 main.go:141] libmachine: Creating Disk image...
	I1007 05:37:10.166110   14391 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:37:10.166315   14391 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/disk.qcow2
	I1007 05:37:10.177265   14391 main.go:141] libmachine: STDOUT: 
	I1007 05:37:10.177349   14391 main.go:141] libmachine: STDERR: 
	I1007 05:37:10.177411   14391 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/disk.qcow2 +20000M
	I1007 05:37:10.186566   14391 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:37:10.186584   14391 main.go:141] libmachine: STDERR: 
	I1007 05:37:10.186605   14391 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/disk.qcow2
	I1007 05:37:10.186609   14391 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:37:10.186623   14391 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:37:10.186658   14391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:f8:95:90:0f:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/disk.qcow2
	I1007 05:37:10.188660   14391 main.go:141] libmachine: STDOUT: 
	I1007 05:37:10.188686   14391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:37:10.188706   14391 client.go:171] duration metric: took 417.362875ms to LocalClient.Create
	I1007 05:37:12.189459   14391 start.go:128] duration metric: took 2.437871917s to createHost
	I1007 05:37:12.189515   14391 start.go:83] releasing machines lock for "flannel-585000", held for 2.43799375s
	W1007 05:37:12.189540   14391 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:37:12.198844   14391 out.go:177] * Deleting "flannel-585000" in qemu2 ...
	W1007 05:37:12.214865   14391 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:37:12.214889   14391 start.go:729] Will try again in 5 seconds ...
	I1007 05:37:17.216984   14391 start.go:360] acquireMachinesLock for flannel-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:37:17.217285   14391 start.go:364] duration metric: took 224.875µs to acquireMachinesLock for "flannel-585000"
	I1007 05:37:17.217369   14391 start.go:93] Provisioning new machine with config: &{Name:flannel-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:37:17.217491   14391 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:37:17.228879   14391 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:37:17.253582   14391 start.go:159] libmachine.API.Create for "flannel-585000" (driver="qemu2")
	I1007 05:37:17.253617   14391 client.go:168] LocalClient.Create starting
	I1007 05:37:17.253704   14391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:37:17.253757   14391 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:17.253772   14391 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:17.253816   14391 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:37:17.253852   14391 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:17.253862   14391 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:17.254349   14391 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:37:17.397764   14391 main.go:141] libmachine: Creating SSH key...
	I1007 05:37:17.454802   14391 main.go:141] libmachine: Creating Disk image...
	I1007 05:37:17.454808   14391 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:37:17.454991   14391 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/disk.qcow2
	I1007 05:37:17.465150   14391 main.go:141] libmachine: STDOUT: 
	I1007 05:37:17.465176   14391 main.go:141] libmachine: STDERR: 
	I1007 05:37:17.465246   14391 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/disk.qcow2 +20000M
	I1007 05:37:17.473927   14391 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:37:17.473943   14391 main.go:141] libmachine: STDERR: 
	I1007 05:37:17.473955   14391 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/disk.qcow2
	I1007 05:37:17.473960   14391 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:37:17.473970   14391 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:37:17.474007   14391 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:53:1b:1c:97:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/flannel-585000/disk.qcow2
	I1007 05:37:17.475850   14391 main.go:141] libmachine: STDOUT: 
	I1007 05:37:17.475865   14391 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:37:17.475877   14391 client.go:171] duration metric: took 222.259709ms to LocalClient.Create
	I1007 05:37:19.477933   14391 start.go:128] duration metric: took 2.260473042s to createHost
	I1007 05:37:19.477956   14391 start.go:83] releasing machines lock for "flannel-585000", held for 2.260702708s
	W1007 05:37:19.478090   14391 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:37:19.487476   14391 out.go:201] 
	W1007 05:37:19.490396   14391 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:37:19.490407   14391 out.go:270] * 
	* 
	W1007 05:37:19.491224   14391 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:37:19.501506   14391 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.89s)
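
Each of these stderr traces shows the same recovery path: StartHost fails, the half-created profile is deleted, start.go waits five seconds ("Will try again in 5 seconds ..."), retries once, and only then exits with GUEST_PROVISION (exit status 80). A rough Go sketch of that single-retry shape, with stand-in names rather than minikube's internals:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the provisioning call; on this agent it always
	// fails because the socket_vmnet dial is refused.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func startWithRetry() error {
		err := startHost()
		if err == nil {
			return nil
		}
		fmt.Println("! StartHost failed, but will try again:", err)
		// In the logs the profile is deleted here before the second attempt.
		time.Sleep(5 * time.Second)
		return startHost()
	}

	func main() {
		if err := startWithRetry(); err != nil {
			// Corresponds to "X Exiting due to GUEST_PROVISION" and exit status 80.
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}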

TestNetworkPlugins/group/bridge/Start (9.78s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.782665459s)

-- stdout --
	* [bridge-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-585000" primary control-plane node in "bridge-585000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-585000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:37:22.067040   14515 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:37:22.067202   14515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:37:22.067206   14515 out.go:358] Setting ErrFile to fd 2...
	I1007 05:37:22.067208   14515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:37:22.067351   14515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:37:22.068529   14515 out.go:352] Setting JSON to false
	I1007 05:37:22.087296   14515 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7613,"bootTime":1728297029,"procs":528,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:37:22.087365   14515 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:37:22.093418   14515 out.go:177] * [bridge-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:37:22.100508   14515 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:37:22.100554   14515 notify.go:220] Checking for updates...
	I1007 05:37:22.107336   14515 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:37:22.110446   14515 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:37:22.113420   14515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:37:22.116410   14515 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:37:22.119469   14515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:37:22.122806   14515 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:37:22.122877   14515 config.go:182] Loaded profile config "stopped-upgrade-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:37:22.122920   14515 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:37:22.126388   14515 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:37:22.133412   14515 start.go:297] selected driver: qemu2
	I1007 05:37:22.133418   14515 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:37:22.133424   14515 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:37:22.135729   14515 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:37:22.137109   14515 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:37:22.140529   14515 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:37:22.140547   14515 cni.go:84] Creating CNI manager for "bridge"
	I1007 05:37:22.140554   14515 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:37:22.140587   14515 start.go:340] cluster config:
	{Name:bridge-585000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:37:22.145028   14515 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:37:22.153357   14515 out.go:177] * Starting "bridge-585000" primary control-plane node in "bridge-585000" cluster
	I1007 05:37:22.157354   14515 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:37:22.157381   14515 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:37:22.157386   14515 cache.go:56] Caching tarball of preloaded images
	I1007 05:37:22.157471   14515 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:37:22.157476   14515 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:37:22.157545   14515 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/bridge-585000/config.json ...
	I1007 05:37:22.157559   14515 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/bridge-585000/config.json: {Name:mk43267823a706066dcd5000b0d25fc31494f1be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:37:22.157880   14515 start.go:360] acquireMachinesLock for bridge-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:37:22.157929   14515 start.go:364] duration metric: took 43.125µs to acquireMachinesLock for "bridge-585000"
	I1007 05:37:22.157941   14515 start.go:93] Provisioning new machine with config: &{Name:bridge-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:37:22.157980   14515 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:37:22.165444   14515 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:37:22.181435   14515 start.go:159] libmachine.API.Create for "bridge-585000" (driver="qemu2")
	I1007 05:37:22.181473   14515 client.go:168] LocalClient.Create starting
	I1007 05:37:22.181581   14515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:37:22.181623   14515 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:22.181641   14515 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:22.181681   14515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:37:22.181711   14515 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:22.181720   14515 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:22.182201   14515 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:37:22.324265   14515 main.go:141] libmachine: Creating SSH key...
	I1007 05:37:22.368986   14515 main.go:141] libmachine: Creating Disk image...
	I1007 05:37:22.368997   14515 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:37:22.369208   14515 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/disk.qcow2
	I1007 05:37:22.379037   14515 main.go:141] libmachine: STDOUT: 
	I1007 05:37:22.379059   14515 main.go:141] libmachine: STDERR: 
	I1007 05:37:22.379127   14515 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/disk.qcow2 +20000M
	I1007 05:37:22.388004   14515 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:37:22.388015   14515 main.go:141] libmachine: STDERR: 
	I1007 05:37:22.388039   14515 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/disk.qcow2
	I1007 05:37:22.388044   14515 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:37:22.388055   14515 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:37:22.388096   14515 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:20:ba:ce:85:25 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/disk.qcow2
	I1007 05:37:22.390085   14515 main.go:141] libmachine: STDOUT: 
	I1007 05:37:22.390096   14515 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:37:22.390115   14515 client.go:171] duration metric: took 208.638708ms to LocalClient.Create
	I1007 05:37:24.392286   14515 start.go:128] duration metric: took 2.234328459s to createHost
	I1007 05:37:24.392372   14515 start.go:83] releasing machines lock for "bridge-585000", held for 2.23447475s
	W1007 05:37:24.392425   14515 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:37:24.398751   14515 out.go:177] * Deleting "bridge-585000" in qemu2 ...
	W1007 05:37:24.420011   14515 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:37:24.420034   14515 start.go:729] Will try again in 5 seconds ...
	I1007 05:37:29.420506   14515 start.go:360] acquireMachinesLock for bridge-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:37:29.421228   14515 start.go:364] duration metric: took 615.833µs to acquireMachinesLock for "bridge-585000"
	I1007 05:37:29.421399   14515 start.go:93] Provisioning new machine with config: &{Name:bridge-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:37:29.421692   14515 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:37:29.434235   14515 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:37:29.480194   14515 start.go:159] libmachine.API.Create for "bridge-585000" (driver="qemu2")
	I1007 05:37:29.480264   14515 client.go:168] LocalClient.Create starting
	I1007 05:37:29.480407   14515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:37:29.480493   14515 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:29.480512   14515 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:29.480577   14515 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:37:29.480634   14515 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:29.480646   14515 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:29.481342   14515 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:37:29.634711   14515 main.go:141] libmachine: Creating SSH key...
	I1007 05:37:29.750511   14515 main.go:141] libmachine: Creating Disk image...
	I1007 05:37:29.750521   14515 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:37:29.750727   14515 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/disk.qcow2
	I1007 05:37:29.760658   14515 main.go:141] libmachine: STDOUT: 
	I1007 05:37:29.760683   14515 main.go:141] libmachine: STDERR: 
	I1007 05:37:29.760740   14515 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/disk.qcow2 +20000M
	I1007 05:37:29.769420   14515 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:37:29.769432   14515 main.go:141] libmachine: STDERR: 
	I1007 05:37:29.769445   14515 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/disk.qcow2
	I1007 05:37:29.769449   14515 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:37:29.769463   14515 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:37:29.769495   14515 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:d8:e7:36:c8:c7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/bridge-585000/disk.qcow2
	I1007 05:37:29.771290   14515 main.go:141] libmachine: STDOUT: 
	I1007 05:37:29.771307   14515 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:37:29.771320   14515 client.go:171] duration metric: took 291.054708ms to LocalClient.Create
	I1007 05:37:31.773490   14515 start.go:128] duration metric: took 2.351766417s to createHost
	I1007 05:37:31.773588   14515 start.go:83] releasing machines lock for "bridge-585000", held for 2.352363542s
	W1007 05:37:31.773979   14515 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:37:31.786715   14515 out.go:201] 
	W1007 05:37:31.790847   14515 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:37:31.790871   14515 out.go:270] * 
	* 
	W1007 05:37:31.793239   14515 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:37:31.803761   14515 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.78s)
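Editor's note: this failure, like the kubenet and old-k8s-version failures below, reduces to the same error: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 driver never gets a network attachment for the VM and exits with status 80 after one delete-and-retry cycle. A minimal Go sketch of that connectivity probe (a diagnostic illustration only, not part of net_test.go or the minikube driver; the socket path is taken from the logs above):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path reported in every failure in this group.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here matches the error in the logs: nothing
		// is listening behind the socket, so every VM create attempt fails.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}

Running a probe like this on the build agent before the suite starts would distinguish an environment problem (the socket_vmnet daemon not running) from a genuine driver regression.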

TestNetworkPlugins/group/kubenet/Start (9.94s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-585000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.942191083s)

-- stdout --
	* [kubenet-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-585000" primary control-plane node in "kubenet-585000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-585000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:37:34.207995   14625 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:37:34.208169   14625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:37:34.208173   14625 out.go:358] Setting ErrFile to fd 2...
	I1007 05:37:34.208175   14625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:37:34.208303   14625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:37:34.209451   14625 out.go:352] Setting JSON to false
	I1007 05:37:34.227359   14625 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7625,"bootTime":1728297029,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:37:34.227429   14625 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:37:34.232320   14625 out.go:177] * [kubenet-585000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:37:34.240302   14625 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:37:34.240367   14625 notify.go:220] Checking for updates...
	I1007 05:37:34.247207   14625 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:37:34.250277   14625 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:37:34.253156   14625 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:37:34.256237   14625 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:37:34.259215   14625 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:37:34.263043   14625 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:37:34.263112   14625 config.go:182] Loaded profile config "stopped-upgrade-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:37:34.263166   14625 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:37:34.267146   14625 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:37:34.281889   14625 start.go:297] selected driver: qemu2
	I1007 05:37:34.281896   14625 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:37:34.281903   14625 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:37:34.284319   14625 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:37:34.289263   14625 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:37:34.292256   14625 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:37:34.292277   14625 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1007 05:37:34.292317   14625 start.go:340] cluster config:
	{Name:kubenet-585000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:37:34.296925   14625 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:37:34.301234   14625 out.go:177] * Starting "kubenet-585000" primary control-plane node in "kubenet-585000" cluster
	I1007 05:37:34.308225   14625 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:37:34.308244   14625 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:37:34.308252   14625 cache.go:56] Caching tarball of preloaded images
	I1007 05:37:34.308327   14625 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:37:34.308333   14625 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:37:34.308390   14625 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/kubenet-585000/config.json ...
	I1007 05:37:34.308400   14625 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/kubenet-585000/config.json: {Name:mkc1fc8810c644dea85e234c32269048290385e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:37:34.308759   14625 start.go:360] acquireMachinesLock for kubenet-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:37:34.308806   14625 start.go:364] duration metric: took 41.375µs to acquireMachinesLock for "kubenet-585000"
	I1007 05:37:34.308820   14625 start.go:93] Provisioning new machine with config: &{Name:kubenet-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:37:34.308866   14625 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:37:34.312179   14625 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:37:34.328040   14625 start.go:159] libmachine.API.Create for "kubenet-585000" (driver="qemu2")
	I1007 05:37:34.328064   14625 client.go:168] LocalClient.Create starting
	I1007 05:37:34.328127   14625 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:37:34.328167   14625 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:34.328180   14625 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:34.328220   14625 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:37:34.328249   14625 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:34.328256   14625 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:34.328666   14625 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:37:34.469898   14625 main.go:141] libmachine: Creating SSH key...
	I1007 05:37:34.673944   14625 main.go:141] libmachine: Creating Disk image...
	I1007 05:37:34.673962   14625 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:37:34.674191   14625 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/disk.qcow2
	I1007 05:37:34.684599   14625 main.go:141] libmachine: STDOUT: 
	I1007 05:37:34.684615   14625 main.go:141] libmachine: STDERR: 
	I1007 05:37:34.684686   14625 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/disk.qcow2 +20000M
	I1007 05:37:34.693194   14625 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:37:34.693210   14625 main.go:141] libmachine: STDERR: 
	I1007 05:37:34.693226   14625 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/disk.qcow2
	I1007 05:37:34.693233   14625 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:37:34.693245   14625 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:37:34.693277   14625 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:39:50:b0:13:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/disk.qcow2
	I1007 05:37:34.695096   14625 main.go:141] libmachine: STDOUT: 
	I1007 05:37:34.695118   14625 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:37:34.695139   14625 client.go:171] duration metric: took 367.074625ms to LocalClient.Create
	I1007 05:37:36.697441   14625 start.go:128] duration metric: took 2.3885765s to createHost
	I1007 05:37:36.697535   14625 start.go:83] releasing machines lock for "kubenet-585000", held for 2.388762s
	W1007 05:37:36.697580   14625 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:37:36.704834   14625 out.go:177] * Deleting "kubenet-585000" in qemu2 ...
	W1007 05:37:36.732237   14625 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:37:36.732269   14625 start.go:729] Will try again in 5 seconds ...
	I1007 05:37:41.734443   14625 start.go:360] acquireMachinesLock for kubenet-585000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:37:41.735033   14625 start.go:364] duration metric: took 481.417µs to acquireMachinesLock for "kubenet-585000"
	I1007 05:37:41.735105   14625 start.go:93] Provisioning new machine with config: &{Name:kubenet-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:37:41.735446   14625 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:37:41.745122   14625 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1007 05:37:41.794615   14625 start.go:159] libmachine.API.Create for "kubenet-585000" (driver="qemu2")
	I1007 05:37:41.794664   14625 client.go:168] LocalClient.Create starting
	I1007 05:37:41.794820   14625 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:37:41.794903   14625 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:41.794927   14625 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:41.794999   14625 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:37:41.795060   14625 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:41.795075   14625 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:41.795644   14625 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:37:41.947846   14625 main.go:141] libmachine: Creating SSH key...
	I1007 05:37:42.055600   14625 main.go:141] libmachine: Creating Disk image...
	I1007 05:37:42.055608   14625 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:37:42.055815   14625 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/disk.qcow2
	I1007 05:37:42.066121   14625 main.go:141] libmachine: STDOUT: 
	I1007 05:37:42.066141   14625 main.go:141] libmachine: STDERR: 
	I1007 05:37:42.066190   14625 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/disk.qcow2 +20000M
	I1007 05:37:42.075169   14625 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:37:42.075185   14625 main.go:141] libmachine: STDERR: 
	I1007 05:37:42.075202   14625 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/disk.qcow2
	I1007 05:37:42.075208   14625 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:37:42.075220   14625 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:37:42.075250   14625 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:91:f2:89:94:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/kubenet-585000/disk.qcow2
	I1007 05:37:42.077193   14625 main.go:141] libmachine: STDOUT: 
	I1007 05:37:42.077207   14625 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:37:42.077220   14625 client.go:171] duration metric: took 282.554792ms to LocalClient.Create
	I1007 05:37:44.078552   14625 start.go:128] duration metric: took 2.343126417s to createHost
	I1007 05:37:44.078641   14625 start.go:83] releasing machines lock for "kubenet-585000", held for 2.343613041s
	W1007 05:37:44.078875   14625 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-585000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:37:44.087384   14625 out.go:201] 
	W1007 05:37:44.091481   14625 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:37:44.091502   14625 out.go:270] * 
	* 
	W1007 05:37:44.092840   14625 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:37:44.103386   14625 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.94s)
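Editor's note: the stderr above also shows the two qemu-img steps that succeed before the network attach fails: a raw-to-qcow2 convert followed by a +20000M resize. A sketch of that disk-prep sequence (assumptions: diskPrep is a hypothetical helper, not a minikube API, and the paths are illustrative placeholders rather than the real .minikube machines directory):

package main

import (
	"fmt"
	"os/exec"
)

// diskPrep mirrors the two qemu-img invocations in the logs: convert the
// raw image to qcow2, then grow the result by the requested amount.
func diskPrep(raw, qcow2, grow string) error {
	for _, args := range [][]string{
		{"convert", "-f", "raw", "-O", "qcow2", raw, qcow2},
		{"resize", qcow2, grow},
	} {
		if out, err := exec.Command("qemu-img", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("qemu-img %v: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	// Illustrative paths; the runs above use .minikube/machines/<profile>/.
	if err := diskPrep("disk.qcow2.raw", "disk.qcow2", "+20000M"); err != nil {
		fmt.Println(err)
	}
}

That both steps print empty STDERR in every attempt confirms the disk pipeline is healthy; only the subsequent socket_vmnet_client connection fails.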

TestStartStop/group/old-k8s-version/serial/FirstStart (9.93s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-055000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-055000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.851185041s)

-- stdout --
	* [old-k8s-version-055000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-055000" primary control-plane node in "old-k8s-version-055000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-055000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:37:46.492311   14740 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:37:46.492468   14740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:37:46.492472   14740 out.go:358] Setting ErrFile to fd 2...
	I1007 05:37:46.492474   14740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:37:46.492614   14740 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:37:46.493841   14740 out.go:352] Setting JSON to false
	I1007 05:37:46.512026   14740 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7637,"bootTime":1728297029,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:37:46.512094   14740 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:37:46.518074   14740 out.go:177] * [old-k8s-version-055000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:37:46.525056   14740 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:37:46.525205   14740 notify.go:220] Checking for updates...
	I1007 05:37:46.532118   14740 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:37:46.534928   14740 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:37:46.538024   14740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:37:46.541062   14740 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:37:46.542371   14740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:37:46.545476   14740 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:37:46.545552   14740 config.go:182] Loaded profile config "stopped-upgrade-431000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1007 05:37:46.545606   14740 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:37:46.550033   14740 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:37:46.555014   14740 start.go:297] selected driver: qemu2
	I1007 05:37:46.555020   14740 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:37:46.555025   14740 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:37:46.557440   14740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:37:46.559992   14740 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:37:46.563061   14740 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:37:46.563074   14740 cni.go:84] Creating CNI manager for ""
	I1007 05:37:46.563094   14740 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1007 05:37:46.563146   14740 start.go:340] cluster config:
	{Name:old-k8s-version-055000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:37:46.567443   14740 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:37:46.576058   14740 out.go:177] * Starting "old-k8s-version-055000" primary control-plane node in "old-k8s-version-055000" cluster
	I1007 05:37:46.579975   14740 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 05:37:46.579995   14740 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1007 05:37:46.579998   14740 cache.go:56] Caching tarball of preloaded images
	I1007 05:37:46.580067   14740 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:37:46.580072   14740 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1007 05:37:46.580124   14740 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/old-k8s-version-055000/config.json ...
	I1007 05:37:46.580135   14740 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/old-k8s-version-055000/config.json: {Name:mk1bc5dbcb03446899d8218b4f1eff901cca34e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:37:46.580403   14740 start.go:360] acquireMachinesLock for old-k8s-version-055000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:37:46.580447   14740 start.go:364] duration metric: took 38.416µs to acquireMachinesLock for "old-k8s-version-055000"
	I1007 05:37:46.580459   14740 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:37:46.580499   14740 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:37:46.584039   14740 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:37:46.598812   14740 start.go:159] libmachine.API.Create for "old-k8s-version-055000" (driver="qemu2")
	I1007 05:37:46.598835   14740 client.go:168] LocalClient.Create starting
	I1007 05:37:46.598903   14740 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:37:46.598946   14740 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:46.598959   14740 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:46.599002   14740 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:37:46.599031   14740 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:46.599037   14740 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:46.599409   14740 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:37:46.743514   14740 main.go:141] libmachine: Creating SSH key...
	I1007 05:37:46.825991   14740 main.go:141] libmachine: Creating Disk image...
	I1007 05:37:46.825999   14740 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:37:46.826204   14740 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/disk.qcow2
	I1007 05:37:46.836021   14740 main.go:141] libmachine: STDOUT: 
	I1007 05:37:46.836040   14740 main.go:141] libmachine: STDERR: 
	I1007 05:37:46.836115   14740 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/disk.qcow2 +20000M
	I1007 05:37:46.844499   14740 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:37:46.844514   14740 main.go:141] libmachine: STDERR: 
	I1007 05:37:46.844527   14740 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/disk.qcow2
	I1007 05:37:46.844532   14740 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:37:46.844546   14740 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:37:46.844576   14740 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:c1:eb:b2:c0:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/disk.qcow2
	I1007 05:37:46.846469   14740 main.go:141] libmachine: STDOUT: 
	I1007 05:37:46.846483   14740 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:37:46.846506   14740 client.go:171] duration metric: took 247.667167ms to LocalClient.Create
	I1007 05:37:48.848742   14740 start.go:128] duration metric: took 2.268223542s to createHost
	I1007 05:37:48.848859   14740 start.go:83] releasing machines lock for "old-k8s-version-055000", held for 2.268445125s
	W1007 05:37:48.848913   14740 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:37:48.862010   14740 out.go:177] * Deleting "old-k8s-version-055000" in qemu2 ...
	W1007 05:37:48.885642   14740 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:37:48.885671   14740 start.go:729] Will try again in 5 seconds ...
	I1007 05:37:53.887734   14740 start.go:360] acquireMachinesLock for old-k8s-version-055000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:37:53.888349   14740 start.go:364] duration metric: took 524.5µs to acquireMachinesLock for "old-k8s-version-055000"
	I1007 05:37:53.888508   14740 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:37:53.888829   14740 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:37:53.898330   14740 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:37:53.945978   14740 start.go:159] libmachine.API.Create for "old-k8s-version-055000" (driver="qemu2")
	I1007 05:37:53.946028   14740 client.go:168] LocalClient.Create starting
	I1007 05:37:53.946196   14740 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:37:53.946270   14740 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:53.946286   14740 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:53.946352   14740 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:37:53.946410   14740 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:53.946422   14740 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:53.946975   14740 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:37:54.104142   14740 main.go:141] libmachine: Creating SSH key...
	I1007 05:37:54.247882   14740 main.go:141] libmachine: Creating Disk image...
	I1007 05:37:54.247891   14740 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:37:54.248114   14740 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/disk.qcow2
	I1007 05:37:54.258295   14740 main.go:141] libmachine: STDOUT: 
	I1007 05:37:54.258311   14740 main.go:141] libmachine: STDERR: 
	I1007 05:37:54.258369   14740 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/disk.qcow2 +20000M
	I1007 05:37:54.267158   14740 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:37:54.267175   14740 main.go:141] libmachine: STDERR: 
	I1007 05:37:54.267186   14740 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/disk.qcow2
	I1007 05:37:54.267191   14740 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:37:54.267200   14740 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:37:54.267233   14740 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:17:a2:2b:75:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/disk.qcow2
	I1007 05:37:54.269130   14740 main.go:141] libmachine: STDOUT: 
	I1007 05:37:54.269149   14740 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:37:54.269161   14740 client.go:171] duration metric: took 323.132834ms to LocalClient.Create
	I1007 05:37:56.271332   14740 start.go:128] duration metric: took 2.382504375s to createHost
	I1007 05:37:56.271426   14740 start.go:83] releasing machines lock for "old-k8s-version-055000", held for 2.383097416s
	W1007 05:37:56.271925   14740 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:37:56.279518   14740 out.go:201] 
	W1007 05:37:56.284539   14740 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:37:56.284599   14740 out.go:270] * 
	* 
	W1007 05:37:56.287480   14740 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:37:56.296501   14740 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-055000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000: exit status 7 (73.132959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-055000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.93s)
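
Every qemu2 failure in this group dies at the same first step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM is never created and everything downstream of FirstStart cascades. The sketch below is a minimal, hypothetical pre-flight probe (not part of minikube; the socket path is taken from the logs above) that dials the same unix socket a client would, to confirm the daemon is listening before rerunning the suite:

```go
// probe_socket_vmnet.go — hypothetical diagnostic, not minikube code.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Default socket path, as reported in the failures above.
	const sock = "/var/run/socket_vmnet"

	// A "connection refused" here reproduces the error in this report and
	// means the daemon is not running (or listens on a different path).
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```

Since the GUEST_PROVISION failures below all repeat the identical error string, a probe like this distinguishes a build-host environment problem from a qemu2 driver regression.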

TestStartStop/group/no-preload/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-544000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-544000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.868043292s)

-- stdout --
	* [no-preload-544000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-544000" primary control-plane node in "no-preload-544000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-544000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:37:51.163512   14754 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:37:51.163670   14754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:37:51.163673   14754 out.go:358] Setting ErrFile to fd 2...
	I1007 05:37:51.163675   14754 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:37:51.163820   14754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:37:51.164965   14754 out.go:352] Setting JSON to false
	I1007 05:37:51.182557   14754 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7642,"bootTime":1728297029,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:37:51.182626   14754 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:37:51.187402   14754 out.go:177] * [no-preload-544000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:37:51.192455   14754 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:37:51.192494   14754 notify.go:220] Checking for updates...
	I1007 05:37:51.199259   14754 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:37:51.202367   14754 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:37:51.205386   14754 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:37:51.208241   14754 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:37:51.211360   14754 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:37:51.214755   14754 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:37:51.214837   14754 config.go:182] Loaded profile config "old-k8s-version-055000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1007 05:37:51.214894   14754 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:37:51.218283   14754 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:37:51.225394   14754 start.go:297] selected driver: qemu2
	I1007 05:37:51.225400   14754 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:37:51.225407   14754 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:37:51.227935   14754 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:37:51.229408   14754 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:37:51.232371   14754 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:37:51.232390   14754 cni.go:84] Creating CNI manager for ""
	I1007 05:37:51.232419   14754 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:37:51.232425   14754 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:37:51.232449   14754 start.go:340] cluster config:
	{Name:no-preload-544000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-544000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:37:51.237100   14754 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:37:51.245243   14754 out.go:177] * Starting "no-preload-544000" primary control-plane node in "no-preload-544000" cluster
	I1007 05:37:51.249377   14754 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:37:51.249490   14754 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/no-preload-544000/config.json ...
	I1007 05:37:51.249510   14754 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/no-preload-544000/config.json: {Name:mk2ef7903cc6b2067aa2701fe6422e9adad570f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:37:51.249521   14754 cache.go:107] acquiring lock: {Name:mk8679fc1b2ce53e2d9ce546115b09608e3115c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:37:51.249537   14754 cache.go:107] acquiring lock: {Name:mk2e19465d4b6204af4b367ea99f4b3b3f001c81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:37:51.249580   14754 cache.go:107] acquiring lock: {Name:mkb35cfc8387aa09a1c063ea55a86a1600e6316e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:37:51.249597   14754 cache.go:107] acquiring lock: {Name:mk79b2b276cd9c7fd0ff7bc76cca56bea2465974 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:37:51.249525   14754 cache.go:107] acquiring lock: {Name:mk8efece51cdcb9f88d49f66f9abcf441e534f05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:37:51.249685   14754 cache.go:107] acquiring lock: {Name:mk8acdf81d57a0bf8371d13cf7219384ff44afa1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:37:51.249776   14754 cache.go:107] acquiring lock: {Name:mk2094bce40b95b6445a7afaf790031048678998 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:37:51.249961   14754 cache.go:115] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1007 05:37:51.249978   14754 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 453.375µs
	I1007 05:37:51.249991   14754 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1007 05:37:51.250006   14754 cache.go:107] acquiring lock: {Name:mkea253b2dc7005337c78873fd6da4ea4c961676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:37:51.250106   14754 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1007 05:37:51.250235   14754 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1007 05:37:51.250282   14754 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1007 05:37:51.250292   14754 start.go:360] acquireMachinesLock for no-preload-544000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:37:51.250322   14754 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 05:37:51.250338   14754 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1007 05:37:51.250380   14754 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1007 05:37:51.250327   14754 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1007 05:37:51.250428   14754 start.go:364] duration metric: took 127.291µs to acquireMachinesLock for "no-preload-544000"
	I1007 05:37:51.250444   14754 start.go:93] Provisioning new machine with config: &{Name:no-preload-544000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-544000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:37:51.250500   14754 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:37:51.259338   14754 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:37:51.263101   14754 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1007 05:37:51.263261   14754 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1007 05:37:51.263876   14754 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1007 05:37:51.263873   14754 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1007 05:37:51.264060   14754 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1007 05:37:51.266241   14754 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1007 05:37:51.266271   14754 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1007 05:37:51.277990   14754 start.go:159] libmachine.API.Create for "no-preload-544000" (driver="qemu2")
	I1007 05:37:51.278010   14754 client.go:168] LocalClient.Create starting
	I1007 05:37:51.278098   14754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:37:51.278136   14754 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:51.278145   14754 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:51.278208   14754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:37:51.278239   14754 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:51.278246   14754 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:51.278619   14754 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:37:51.425153   14754 main.go:141] libmachine: Creating SSH key...
	I1007 05:37:51.488066   14754 main.go:141] libmachine: Creating Disk image...
	I1007 05:37:51.488083   14754 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:37:51.488283   14754 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/disk.qcow2
	I1007 05:37:51.591186   14754 main.go:141] libmachine: STDOUT: 
	I1007 05:37:51.591205   14754 main.go:141] libmachine: STDERR: 
	I1007 05:37:51.591264   14754 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/disk.qcow2 +20000M
	I1007 05:37:51.600284   14754 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:37:51.600301   14754 main.go:141] libmachine: STDERR: 
	I1007 05:37:51.600319   14754 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/disk.qcow2
	I1007 05:37:51.600324   14754 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:37:51.600341   14754 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:37:51.600370   14754 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:0f:2a:1f:8d:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/disk.qcow2
	I1007 05:37:51.602597   14754 main.go:141] libmachine: STDOUT: 
	I1007 05:37:51.602616   14754 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:37:51.602637   14754 client.go:171] duration metric: took 324.625916ms to LocalClient.Create
	I1007 05:37:51.702822   14754 cache.go:162] opening:  /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1007 05:37:51.742003   14754 cache.go:162] opening:  /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I1007 05:37:51.753368   14754 cache.go:162] opening:  /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1007 05:37:51.799191   14754 cache.go:162] opening:  /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I1007 05:37:51.890687   14754 cache.go:162] opening:  /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1007 05:37:51.928826   14754 cache.go:162] opening:  /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I1007 05:37:52.014491   14754 cache.go:162] opening:  /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1007 05:37:52.025216   14754 cache.go:157] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1007 05:37:52.025247   14754 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 775.680333ms
	I1007 05:37:52.025269   14754 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1007 05:37:53.603088   14754 start.go:128] duration metric: took 2.352577833s to createHost
	I1007 05:37:53.603157   14754 start.go:83] releasing machines lock for "no-preload-544000", held for 2.352762125s
	W1007 05:37:53.603202   14754 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:37:53.615388   14754 out.go:177] * Deleting "no-preload-544000" in qemu2 ...
	W1007 05:37:53.640905   14754 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:37:53.640939   14754 start.go:729] Will try again in 5 seconds ...
	I1007 05:37:55.223640   14754 cache.go:157] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1007 05:37:55.223710   14754 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.974113667s
	I1007 05:37:55.223743   14754 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1007 05:37:55.678740   14754 cache.go:157] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1007 05:37:55.678790   14754 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 4.429117917s
	I1007 05:37:55.678819   14754 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1007 05:37:55.772239   14754 cache.go:157] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1007 05:37:55.772303   14754 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 4.522874667s
	I1007 05:37:55.772333   14754 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1007 05:37:55.930980   14754 cache.go:157] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1007 05:37:55.931030   14754 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 4.68159575s
	I1007 05:37:55.931074   14754 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1007 05:37:56.818193   14754 cache.go:157] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1007 05:37:56.818220   14754 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 5.568316125s
	I1007 05:37:56.818260   14754 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1007 05:37:58.641042   14754 start.go:360] acquireMachinesLock for no-preload-544000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:37:58.641423   14754 start.go:364] duration metric: took 306.292µs to acquireMachinesLock for "no-preload-544000"
	I1007 05:37:58.641512   14754 start.go:93] Provisioning new machine with config: &{Name:no-preload-544000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-544000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:37:58.641840   14754 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:37:58.650392   14754 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:37:58.697623   14754 start.go:159] libmachine.API.Create for "no-preload-544000" (driver="qemu2")
	I1007 05:37:58.697666   14754 client.go:168] LocalClient.Create starting
	I1007 05:37:58.697795   14754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:37:58.697865   14754 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:58.697884   14754 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:58.697957   14754 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:37:58.697992   14754 main.go:141] libmachine: Decoding PEM data...
	I1007 05:37:58.698010   14754 main.go:141] libmachine: Parsing certificate...
	I1007 05:37:58.698596   14754 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:37:58.850868   14754 main.go:141] libmachine: Creating SSH key...
	I1007 05:37:58.918209   14754 main.go:141] libmachine: Creating Disk image...
	I1007 05:37:58.918215   14754 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:37:58.918398   14754 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/disk.qcow2
	I1007 05:37:58.928595   14754 main.go:141] libmachine: STDOUT: 
	I1007 05:37:58.928614   14754 main.go:141] libmachine: STDERR: 
	I1007 05:37:58.928672   14754 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/disk.qcow2 +20000M
	I1007 05:37:58.937232   14754 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:37:58.937247   14754 main.go:141] libmachine: STDERR: 
	I1007 05:37:58.937260   14754 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/disk.qcow2
	I1007 05:37:58.937265   14754 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:37:58.937277   14754 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:37:58.937315   14754 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:97:81:4b:81:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/disk.qcow2
	I1007 05:37:58.939335   14754 main.go:141] libmachine: STDOUT: 
	I1007 05:37:58.939395   14754 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:37:58.939416   14754 client.go:171] duration metric: took 241.749458ms to LocalClient.Create
	I1007 05:38:00.873121   14754 cache.go:157] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1007 05:38:00.873189   14754 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 9.623779083s
	I1007 05:38:00.873214   14754 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1007 05:38:00.873279   14754 cache.go:87] Successfully saved all images to host disk.
	I1007 05:38:00.941592   14754 start.go:128] duration metric: took 2.299772625s to createHost
	I1007 05:38:00.941642   14754 start.go:83] releasing machines lock for "no-preload-544000", held for 2.30021475s
	W1007 05:38:00.941910   14754 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-544000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-544000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:00.954411   14754 out.go:201] 
	W1007 05:38:00.961360   14754 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:38:00.961385   14754 out.go:270] * 
	* 
	W1007 05:38:00.964183   14754 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:38:00.978360   14754 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-544000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000: exit status 7 (68.937125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-544000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.94s)
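
One detail worth noting in this run: --preload=false still exercises the image cache, and the cache.go lines above show per-image locks being acquired and all eight images being saved to the host disk concurrently with, and independently of, VM creation; that is why "Successfully saved all images to host disk" appears even though both createHost attempts failed. A stripped-down sketch of that fan-out pattern, with hypothetical names rather than minikube's actual code:

```go
// Hypothetical sketch of concurrent image caching, mirroring the
// cache.go:96/cache.go:80 log lines above; not minikube's real code.
package main

import (
	"fmt"
	"sync"
	"time"
)

// saveImage stands in for the real pull-and-save-to-tar work.
func saveImage(ref string) error {
	time.Sleep(50 * time.Millisecond)
	return nil
}

func main() {
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/etcd:3.5.15-0",
		"registry.k8s.io/pause:3.10",
	}

	var wg sync.WaitGroup
	for _, ref := range images {
		wg.Add(1)
		go func(ref string) {
			defer wg.Done()
			start := time.Now()
			if err := saveImage(ref); err != nil {
				fmt.Printf("save %s failed: %v\n", ref, err)
				return
			}
			// Mirrors the `cache image ... took <duration>` log lines.
			fmt.Printf("cache image %q took %s\n", ref, time.Since(start))
		}(ref)
	}
	wg.Wait()
	fmt.Println("Successfully saved all images to host disk.")
}
```

Because caching and host creation race independently, the etcd image finishes saving at 05:38:00, after the second QEMU launch has already failed; the cache success is real, but no VM exists to load the images into.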

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-055000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-055000 create -f testdata/busybox.yaml: exit status 1 (29.066583ms)

** stderr ** 
	error: context "old-k8s-version-055000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-055000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000: exit status 7 (33.161709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-055000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000: exit status 7 (32.86625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-055000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
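
This failure and the addon checks that follow are cascades of the failed FirstStart: the VM never booted, so no kubeconfig context named old-k8s-version-055000 was ever written, and every kubectl --context invocation exits with "context ... does not exist". A hypothetical fail-fast guard (not part of the suite) that checks for the context before running dependent steps:

```go
// Hypothetical guard, not part of the test suite: verify a kubeconfig
// context exists before running kubectl commands that depend on it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func contextExists(name string) (bool, error) {
	// `kubectl config get-contexts -o name` prints one context per line.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	const name = "old-k8s-version-055000"
	ok, err := contextExists(name)
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl not usable:", err)
		os.Exit(1)
	}
	if !ok {
		fmt.Fprintf(os.Stderr, "context %q does not exist; skipping dependent steps\n", name)
		os.Exit(1)
	}
	fmt.Println("context present")
}
```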

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-055000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-055000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-055000 describe deploy/metrics-server -n kube-system: exit status 1 (27.185041ms)

** stderr ** 
	error: context "old-k8s-version-055000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-055000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000: exit status 7 (34.070833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-055000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-055000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-055000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.722180667s)

-- stdout --
	* [old-k8s-version-055000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-055000" primary control-plane node in "old-k8s-version-055000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-055000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-055000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:38:00.347595   14834 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:38:00.347767   14834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:00.347769   14834 out.go:358] Setting ErrFile to fd 2...
	I1007 05:38:00.347772   14834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:00.347911   14834 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:38:00.349060   14834 out.go:352] Setting JSON to false
	I1007 05:38:00.366793   14834 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7651,"bootTime":1728297029,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:38:00.366857   14834 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:38:00.371760   14834 out.go:177] * [old-k8s-version-055000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:38:00.379771   14834 notify.go:220] Checking for updates...
	I1007 05:38:00.382634   14834 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:38:00.390719   14834 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:38:00.398675   14834 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:38:00.406689   14834 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:38:00.410702   14834 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:38:00.417703   14834 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:38:00.422000   14834 config.go:182] Loaded profile config "old-k8s-version-055000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1007 05:38:00.426769   14834 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1007 05:38:00.430760   14834 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:38:00.434741   14834 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:38:00.441709   14834 start.go:297] selected driver: qemu2
	I1007 05:38:00.441715   14834 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:38:00.441764   14834 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:38:00.444487   14834 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:38:00.444519   14834 cni.go:84] Creating CNI manager for ""
	I1007 05:38:00.444547   14834 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1007 05:38:00.444573   14834 start.go:340] cluster config:
	{Name:old-k8s-version-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:38:00.449497   14834 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:38:00.457705   14834 out.go:177] * Starting "old-k8s-version-055000" primary control-plane node in "old-k8s-version-055000" cluster
	I1007 05:38:00.461765   14834 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 05:38:00.461783   14834 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1007 05:38:00.461792   14834 cache.go:56] Caching tarball of preloaded images
	I1007 05:38:00.461862   14834 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:38:00.461868   14834 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1007 05:38:00.461933   14834 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/old-k8s-version-055000/config.json ...
	I1007 05:38:00.462267   14834 start.go:360] acquireMachinesLock for old-k8s-version-055000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:38:00.941778   14834 start.go:364] duration metric: took 479.458625ms to acquireMachinesLock for "old-k8s-version-055000"
	I1007 05:38:00.941928   14834 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:38:00.941957   14834 fix.go:54] fixHost starting: 
	I1007 05:38:00.942680   14834 fix.go:112] recreateIfNeeded on old-k8s-version-055000: state=Stopped err=<nil>
	W1007 05:38:00.942721   14834 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:38:00.958434   14834 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-055000" ...
	I1007 05:38:00.965333   14834 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:38:00.965690   14834 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:17:a2:2b:75:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/disk.qcow2
	I1007 05:38:00.976890   14834 main.go:141] libmachine: STDOUT: 
	I1007 05:38:00.976974   14834 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:38:00.977110   14834 fix.go:56] duration metric: took 35.149458ms for fixHost
	I1007 05:38:00.977132   14834 start.go:83] releasing machines lock for "old-k8s-version-055000", held for 35.272458ms
	W1007 05:38:00.977164   14834 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:38:00.977324   14834 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:00.977343   14834 start.go:729] Will try again in 5 seconds ...
	I1007 05:38:05.979439   14834 start.go:360] acquireMachinesLock for old-k8s-version-055000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:38:05.979864   14834 start.go:364] duration metric: took 284.292µs to acquireMachinesLock for "old-k8s-version-055000"
	I1007 05:38:05.979996   14834 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:38:05.980016   14834 fix.go:54] fixHost starting: 
	I1007 05:38:05.980704   14834 fix.go:112] recreateIfNeeded on old-k8s-version-055000: state=Stopped err=<nil>
	W1007 05:38:05.980730   14834 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:38:05.985439   14834 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-055000" ...
	I1007 05:38:05.992305   14834 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:38:05.992536   14834 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:17:a2:2b:75:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/old-k8s-version-055000/disk.qcow2
	I1007 05:38:06.002761   14834 main.go:141] libmachine: STDOUT: 
	I1007 05:38:06.002872   14834 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:38:06.002962   14834 fix.go:56] duration metric: took 22.945125ms for fixHost
	I1007 05:38:06.002980   14834 start.go:83] releasing machines lock for "old-k8s-version-055000", held for 23.091542ms
	W1007 05:38:06.003186   14834 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-055000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-055000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:06.011263   14834 out.go:201] 
	W1007 05:38:06.014286   14834 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:38:06.014329   14834 out.go:270] * 
	* 
	W1007 05:38:06.016747   14834 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:38:06.028227   14834 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-055000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000: exit status 7 (75.910542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-055000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.80s)
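Note: this failure, like the other qemu2 start failures in this report, exits with status 80 after ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused, i.e. the socket_vmnet daemon that provides minikube's qemu2 networking was unreachable on the build agent. A minimal spot-check, assuming socket_vmnet was installed via Homebrew (the service name and socket path below may differ on other setups):

	ls -l /var/run/socket_vmnet               # the listening socket should exist
	sudo launchctl list | grep socket_vmnet   # the daemon should be loaded
	sudo brew services restart socket_vmnet   # restart the helper if it is down

Once the daemon is reachable again, re-running the failed start command should get past the "Restarting existing qemu2 VM" step.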

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-544000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-544000 create -f testdata/busybox.yaml: exit status 1 (29.017166ms)

** stderr ** 
	error: context "no-preload-544000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-544000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000: exit status 7 (33.304125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-544000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000: exit status 7 (33.078125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-544000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-544000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-544000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-544000 describe deploy/metrics-server -n kube-system: exit status 1 (27.362709ms)

** stderr ** 
	error: context "no-preload-544000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-544000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000: exit status 7 (33.667417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-544000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-544000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-544000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.191015083s)

-- stdout --
	* [no-preload-544000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-544000" primary control-plane node in "no-preload-544000" cluster
	* Restarting existing qemu2 VM for "no-preload-544000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-544000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:38:04.591962   14875 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:38:04.592124   14875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:04.592128   14875 out.go:358] Setting ErrFile to fd 2...
	I1007 05:38:04.592130   14875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:04.592265   14875 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:38:04.593343   14875 out.go:352] Setting JSON to false
	I1007 05:38:04.610848   14875 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7655,"bootTime":1728297029,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:38:04.610924   14875 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:38:04.614411   14875 out.go:177] * [no-preload-544000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:38:04.620376   14875 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:38:04.620464   14875 notify.go:220] Checking for updates...
	I1007 05:38:04.627384   14875 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:38:04.630372   14875 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:38:04.633338   14875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:38:04.636408   14875 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:38:04.637617   14875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:38:04.640719   14875 config.go:182] Loaded profile config "no-preload-544000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:38:04.640978   14875 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:38:04.645377   14875 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:38:04.650337   14875 start.go:297] selected driver: qemu2
	I1007 05:38:04.650343   14875 start.go:901] validating driver "qemu2" against &{Name:no-preload-544000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-544000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:38:04.650396   14875 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:38:04.652794   14875 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:38:04.652825   14875 cni.go:84] Creating CNI manager for ""
	I1007 05:38:04.652846   14875 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:38:04.652885   14875 start.go:340] cluster config:
	{Name:no-preload-544000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-544000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:38:04.657354   14875 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:38:04.665358   14875 out.go:177] * Starting "no-preload-544000" primary control-plane node in "no-preload-544000" cluster
	I1007 05:38:04.669321   14875 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:38:04.669407   14875 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/no-preload-544000/config.json ...
	I1007 05:38:04.669433   14875 cache.go:107] acquiring lock: {Name:mk8efece51cdcb9f88d49f66f9abcf441e534f05 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:38:04.669432   14875 cache.go:107] acquiring lock: {Name:mk8679fc1b2ce53e2d9ce546115b09608e3115c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:38:04.669467   14875 cache.go:107] acquiring lock: {Name:mk79b2b276cd9c7fd0ff7bc76cca56bea2465974 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:38:04.669473   14875 cache.go:107] acquiring lock: {Name:mk2e19465d4b6204af4b367ea99f4b3b3f001c81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:38:04.669523   14875 cache.go:115] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1007 05:38:04.669531   14875 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 101.958µs
	I1007 05:38:04.669538   14875 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1007 05:38:04.669546   14875 cache.go:115] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1007 05:38:04.669550   14875 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 87.625µs
	I1007 05:38:04.669558   14875 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1007 05:38:04.669565   14875 cache.go:107] acquiring lock: {Name:mkea253b2dc7005337c78873fd6da4ea4c961676 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:38:04.669577   14875 cache.go:115] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1007 05:38:04.669587   14875 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 141.916µs
	I1007 05:38:04.669597   14875 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1007 05:38:04.669589   14875 cache.go:115] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1007 05:38:04.669606   14875 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 174.708µs
	I1007 05:38:04.669610   14875 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1007 05:38:04.669618   14875 cache.go:115] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1007 05:38:04.669621   14875 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 57.25µs
	I1007 05:38:04.669625   14875 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1007 05:38:04.669645   14875 cache.go:107] acquiring lock: {Name:mk8acdf81d57a0bf8371d13cf7219384ff44afa1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:38:04.669637   14875 cache.go:107] acquiring lock: {Name:mkb35cfc8387aa09a1c063ea55a86a1600e6316e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:38:04.669657   14875 cache.go:107] acquiring lock: {Name:mk2094bce40b95b6445a7afaf790031048678998 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:38:04.669715   14875 cache.go:115] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1007 05:38:04.669724   14875 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 118.625µs
	I1007 05:38:04.669729   14875 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1007 05:38:04.669745   14875 cache.go:115] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1007 05:38:04.669752   14875 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 135.208µs
	I1007 05:38:04.669758   14875 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1007 05:38:04.669817   14875 start.go:360] acquireMachinesLock for no-preload-544000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:38:04.669857   14875 start.go:364] duration metric: took 32.875µs to acquireMachinesLock for "no-preload-544000"
	I1007 05:38:04.669870   14875 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:38:04.669874   14875 fix.go:54] fixHost starting: 
	I1007 05:38:04.669895   14875 cache.go:115] /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1007 05:38:04.669900   14875 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 330.875µs
	I1007 05:38:04.669904   14875 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1007 05:38:04.669908   14875 cache.go:87] Successfully saved all images to host disk.
	I1007 05:38:04.670001   14875 fix.go:112] recreateIfNeeded on no-preload-544000: state=Stopped err=<nil>
	W1007 05:38:04.670011   14875 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:38:04.678384   14875 out.go:177] * Restarting existing qemu2 VM for "no-preload-544000" ...
	I1007 05:38:04.682368   14875 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:38:04.682406   14875 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:97:81:4b:81:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/disk.qcow2
	I1007 05:38:04.684670   14875 main.go:141] libmachine: STDOUT: 
	I1007 05:38:04.684690   14875 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:38:04.684727   14875 fix.go:56] duration metric: took 14.849ms for fixHost
	I1007 05:38:04.684733   14875 start.go:83] releasing machines lock for "no-preload-544000", held for 14.871875ms
	W1007 05:38:04.684739   14875 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:38:04.684780   14875 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:04.684785   14875 start.go:729] Will try again in 5 seconds ...
	I1007 05:38:09.686920   14875 start.go:360] acquireMachinesLock for no-preload-544000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:38:09.687411   14875 start.go:364] duration metric: took 399µs to acquireMachinesLock for "no-preload-544000"
	I1007 05:38:09.687514   14875 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:38:09.687537   14875 fix.go:54] fixHost starting: 
	I1007 05:38:09.688322   14875 fix.go:112] recreateIfNeeded on no-preload-544000: state=Stopped err=<nil>
	W1007 05:38:09.688352   14875 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:38:09.701827   14875 out.go:177] * Restarting existing qemu2 VM for "no-preload-544000" ...
	I1007 05:38:09.704662   14875 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:38:09.704918   14875 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:97:81:4b:81:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/no-preload-544000/disk.qcow2
	I1007 05:38:09.715373   14875 main.go:141] libmachine: STDOUT: 
	I1007 05:38:09.715428   14875 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:38:09.715515   14875 fix.go:56] duration metric: took 27.980209ms for fixHost
	I1007 05:38:09.715537   14875 start.go:83] releasing machines lock for "no-preload-544000", held for 28.102291ms
	W1007 05:38:09.715715   14875 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-544000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-544000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:09.723708   14875 out.go:201] 
	W1007 05:38:09.726747   14875 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:38:09.726773   14875 out.go:270] * 
	* 
	W1007 05:38:09.729312   14875 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:38:09.737687   14875 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-544000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000: exit status 7 (75.106708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-544000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.27s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-055000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000: exit status 7 (35.432584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-055000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-055000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-055000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-055000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.445458ms)

** stderr ** 
	error: context "old-k8s-version-055000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-055000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000: exit status 7 (33.213417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-055000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-055000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000: exit status 7 (33.132208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-055000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-055000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-055000 --alsologtostderr -v=1: exit status 83 (46.186291ms)

-- stdout --
	* The control-plane node old-k8s-version-055000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-055000"

-- /stdout --
** stderr ** 
	I1007 05:38:06.322964   14894 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:38:06.323388   14894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:06.323392   14894 out.go:358] Setting ErrFile to fd 2...
	I1007 05:38:06.323395   14894 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:06.323560   14894 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:38:06.323787   14894 out.go:352] Setting JSON to false
	I1007 05:38:06.323795   14894 mustload.go:65] Loading cluster: old-k8s-version-055000
	I1007 05:38:06.324013   14894 config.go:182] Loaded profile config "old-k8s-version-055000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1007 05:38:06.328798   14894 out.go:177] * The control-plane node old-k8s-version-055000 host is not running: state=Stopped
	I1007 05:38:06.332768   14894 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-055000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-055000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000: exit status 7 (33.250708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-055000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000: exit status 7 (33.490667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-055000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
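
Exit status 83 here is minikube declining to pause a profile whose host is stopped, as the "state=Stopped" advice in stdout shows; the exit status 7 from the follow-up status calls is the usual code for a stopped host. Using only commands that already appear in this log, a caller can guard the pause like this:

	out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000
	# prints "Running" -> pause is possible; "Stopped" -> start the profile first:
	out/minikube-darwin-arm64 start -p old-k8s-version-055000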

TestStartStop/group/embed-certs/serial/FirstStart (10.09s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-860000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-860000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.01406875s)

-- stdout --
	* [embed-certs-860000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-860000" primary control-plane node in "embed-certs-860000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-860000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:38:06.660722   14911 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:38:06.660870   14911 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:06.660874   14911 out.go:358] Setting ErrFile to fd 2...
	I1007 05:38:06.660876   14911 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:06.661012   14911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:38:06.662200   14911 out.go:352] Setting JSON to false
	I1007 05:38:06.679817   14911 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7657,"bootTime":1728297029,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:38:06.679892   14911 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:38:06.684776   14911 out.go:177] * [embed-certs-860000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:38:06.691694   14911 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:38:06.691732   14911 notify.go:220] Checking for updates...
	I1007 05:38:06.698766   14911 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:38:06.701739   14911 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:38:06.704817   14911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:38:06.707779   14911 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:38:06.710738   14911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:38:06.714170   14911 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:38:06.714256   14911 config.go:182] Loaded profile config "no-preload-544000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:38:06.714304   14911 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:38:06.718788   14911 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:38:06.725726   14911 start.go:297] selected driver: qemu2
	I1007 05:38:06.725731   14911 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:38:06.725736   14911 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:38:06.728232   14911 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:38:06.731774   14911 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:38:06.734751   14911 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:38:06.734769   14911 cni.go:84] Creating CNI manager for ""
	I1007 05:38:06.734798   14911 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:38:06.734803   14911 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:38:06.734834   14911 start.go:340] cluster config:
	{Name:embed-certs-860000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:38:06.739799   14911 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:38:06.747725   14911 out.go:177] * Starting "embed-certs-860000" primary control-plane node in "embed-certs-860000" cluster
	I1007 05:38:06.751798   14911 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:38:06.751814   14911 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:38:06.751832   14911 cache.go:56] Caching tarball of preloaded images
	I1007 05:38:06.751915   14911 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:38:06.751920   14911 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:38:06.751993   14911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/embed-certs-860000/config.json ...
	I1007 05:38:06.752004   14911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/embed-certs-860000/config.json: {Name:mke60c6e13b6d06fdab37af450e383f78f555360 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:38:06.752287   14911 start.go:360] acquireMachinesLock for embed-certs-860000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:38:06.752337   14911 start.go:364] duration metric: took 41.875µs to acquireMachinesLock for "embed-certs-860000"
	I1007 05:38:06.752350   14911 start.go:93] Provisioning new machine with config: &{Name:embed-certs-860000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:38:06.752384   14911 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:38:06.755661   14911 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:38:06.772762   14911 start.go:159] libmachine.API.Create for "embed-certs-860000" (driver="qemu2")
	I1007 05:38:06.772790   14911 client.go:168] LocalClient.Create starting
	I1007 05:38:06.772853   14911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:38:06.772912   14911 main.go:141] libmachine: Decoding PEM data...
	I1007 05:38:06.772928   14911 main.go:141] libmachine: Parsing certificate...
	I1007 05:38:06.772954   14911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:38:06.772982   14911 main.go:141] libmachine: Decoding PEM data...
	I1007 05:38:06.772990   14911 main.go:141] libmachine: Parsing certificate...
	I1007 05:38:06.773389   14911 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:38:06.915289   14911 main.go:141] libmachine: Creating SSH key...
	I1007 05:38:07.074350   14911 main.go:141] libmachine: Creating Disk image...
	I1007 05:38:07.074359   14911 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:38:07.074574   14911 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/disk.qcow2
	I1007 05:38:07.084855   14911 main.go:141] libmachine: STDOUT: 
	I1007 05:38:07.084878   14911 main.go:141] libmachine: STDERR: 
	I1007 05:38:07.084946   14911 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/disk.qcow2 +20000M
	I1007 05:38:07.093466   14911 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:38:07.093493   14911 main.go:141] libmachine: STDERR: 
	I1007 05:38:07.093510   14911 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/disk.qcow2
	I1007 05:38:07.093516   14911 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:38:07.093530   14911 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:38:07.093557   14911 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:5e:03:8a:89:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/disk.qcow2
	I1007 05:38:07.095495   14911 main.go:141] libmachine: STDOUT: 
	I1007 05:38:07.095513   14911 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:38:07.095536   14911 client.go:171] duration metric: took 322.745417ms to LocalClient.Create
	I1007 05:38:09.097762   14911 start.go:128] duration metric: took 2.34538575s to createHost
	I1007 05:38:09.097842   14911 start.go:83] releasing machines lock for "embed-certs-860000", held for 2.345539458s
	W1007 05:38:09.097899   14911 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:09.108121   14911 out.go:177] * Deleting "embed-certs-860000" in qemu2 ...
	W1007 05:38:09.137103   14911 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:09.137133   14911 start.go:729] Will try again in 5 seconds ...
	I1007 05:38:14.139219   14911 start.go:360] acquireMachinesLock for embed-certs-860000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:38:14.139808   14911 start.go:364] duration metric: took 428.375µs to acquireMachinesLock for "embed-certs-860000"
	I1007 05:38:14.139975   14911 start.go:93] Provisioning new machine with config: &{Name:embed-certs-860000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:38:14.140265   14911 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:38:14.145865   14911 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:38:14.195127   14911 start.go:159] libmachine.API.Create for "embed-certs-860000" (driver="qemu2")
	I1007 05:38:14.195286   14911 client.go:168] LocalClient.Create starting
	I1007 05:38:14.195436   14911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:38:14.195522   14911 main.go:141] libmachine: Decoding PEM data...
	I1007 05:38:14.195542   14911 main.go:141] libmachine: Parsing certificate...
	I1007 05:38:14.195629   14911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:38:14.195691   14911 main.go:141] libmachine: Decoding PEM data...
	I1007 05:38:14.195703   14911 main.go:141] libmachine: Parsing certificate...
	I1007 05:38:14.196348   14911 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:38:14.348969   14911 main.go:141] libmachine: Creating SSH key...
	I1007 05:38:14.576286   14911 main.go:141] libmachine: Creating Disk image...
	I1007 05:38:14.576299   14911 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:38:14.576561   14911 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/disk.qcow2
	I1007 05:38:14.587118   14911 main.go:141] libmachine: STDOUT: 
	I1007 05:38:14.587143   14911 main.go:141] libmachine: STDERR: 
	I1007 05:38:14.587208   14911 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/disk.qcow2 +20000M
	I1007 05:38:14.595728   14911 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:38:14.595753   14911 main.go:141] libmachine: STDERR: 
	I1007 05:38:14.595777   14911 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/disk.qcow2
	I1007 05:38:14.595782   14911 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:38:14.595795   14911 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:38:14.595825   14911 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:d7:49:22:68:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/disk.qcow2
	I1007 05:38:14.598630   14911 main.go:141] libmachine: STDOUT: 
	I1007 05:38:14.598661   14911 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:38:14.598676   14911 client.go:171] duration metric: took 403.390834ms to LocalClient.Create
	I1007 05:38:16.600821   14911 start.go:128] duration metric: took 2.460573125s to createHost
	I1007 05:38:16.600905   14911 start.go:83] releasing machines lock for "embed-certs-860000", held for 2.461092875s
	W1007 05:38:16.601265   14911 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-860000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-860000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:16.610861   14911 out.go:201] 
	W1007 05:38:16.615933   14911 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:38:16.615962   14911 out.go:270] * 
	* 
	W1007 05:38:16.618524   14911 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:38:16.628762   14911 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-860000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000: exit status 7 (74.306292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.09s)
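
Both VM creation attempts die the same way: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the GUEST_PROVISION exit status 80 reflects a host-side networking failure, not a Kubernetes problem. A minimal triage on the build host, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver docs (service name and socket path can differ per install):

	ls -l /var/run/socket_vmnet                    # the daemon's socket should exist
	brew services list | grep socket_vmnet         # the service should be "started"
	HOMEBREW=$(which brew) && sudo "${HOMEBREW}" services restart socket_vmnet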

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-544000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000: exit status 7 (35.873791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-544000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)
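
The message 'context "no-preload-544000" does not exist' means the profile never got far enough for minikube to write its context into the kubeconfig; that happens only after a successful start, so every kubectl-based assertion for this profile fails at client-config time. A quick confirmation (sketch):

	kubectl config get-contexts
	# a no-preload-544000 entry appears only once "minikube start -p no-preload-544000" has succeeded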

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-544000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-544000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-544000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.668792ms)

** stderr ** 
	error: context "no-preload-544000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-544000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000: exit status 7 (33.4165ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-544000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-544000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000: exit status 7 (33.271875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-544000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/no-preload/serial/Pause (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-544000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-544000 --alsologtostderr -v=1: exit status 83 (44.161166ms)

-- stdout --
	* The control-plane node no-preload-544000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-544000"

-- /stdout --
** stderr ** 
	I1007 05:38:10.035539   14933 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:38:10.035750   14933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:10.035753   14933 out.go:358] Setting ErrFile to fd 2...
	I1007 05:38:10.035755   14933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:10.035895   14933 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:38:10.036125   14933 out.go:352] Setting JSON to false
	I1007 05:38:10.036134   14933 mustload.go:65] Loading cluster: no-preload-544000
	I1007 05:38:10.036369   14933 config.go:182] Loaded profile config "no-preload-544000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:38:10.040621   14933 out.go:177] * The control-plane node no-preload-544000 host is not running: state=Stopped
	I1007 05:38:10.043571   14933 out.go:177]   To start a cluster, run: "minikube start -p no-preload-544000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-544000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000: exit status 7 (32.680416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-544000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000: exit status 7 (33.259333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-544000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-878000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-878000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.927283417s)

-- stdout --
	* [default-k8s-diff-port-878000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-878000" primary control-plane node in "default-k8s-diff-port-878000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-878000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:38:10.483319   14957 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:38:10.483483   14957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:10.483486   14957 out.go:358] Setting ErrFile to fd 2...
	I1007 05:38:10.483488   14957 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:10.483628   14957 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:38:10.484808   14957 out.go:352] Setting JSON to false
	I1007 05:38:10.502701   14957 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7661,"bootTime":1728297029,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:38:10.502780   14957 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:38:10.507725   14957 out.go:177] * [default-k8s-diff-port-878000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:38:10.513736   14957 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:38:10.513731   14957 notify.go:220] Checking for updates...
	I1007 05:38:10.520725   14957 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:38:10.523704   14957 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:38:10.526756   14957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:38:10.529727   14957 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:38:10.531084   14957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:38:10.534094   14957 config.go:182] Loaded profile config "embed-certs-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:38:10.534163   14957 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:38:10.534201   14957 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:38:10.538768   14957 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:38:10.543663   14957 start.go:297] selected driver: qemu2
	I1007 05:38:10.543669   14957 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:38:10.543682   14957 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:38:10.546037   14957 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:38:10.548703   14957 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:38:10.551888   14957 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:38:10.551913   14957 cni.go:84] Creating CNI manager for ""
	I1007 05:38:10.551936   14957 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:38:10.551941   14957 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:38:10.551979   14957 start.go:340] cluster config:
	{Name:default-k8s-diff-port-878000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:38:10.556558   14957 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:38:10.564699   14957 out.go:177] * Starting "default-k8s-diff-port-878000" primary control-plane node in "default-k8s-diff-port-878000" cluster
	I1007 05:38:10.568684   14957 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:38:10.568701   14957 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:38:10.568709   14957 cache.go:56] Caching tarball of preloaded images
	I1007 05:38:10.568809   14957 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:38:10.568815   14957 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:38:10.568896   14957 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/default-k8s-diff-port-878000/config.json ...
	I1007 05:38:10.568908   14957 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/default-k8s-diff-port-878000/config.json: {Name:mk15ff0631ead0b1b279f07deeb9e26eaedb70e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:38:10.569283   14957 start.go:360] acquireMachinesLock for default-k8s-diff-port-878000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:38:10.569351   14957 start.go:364] duration metric: took 46.416µs to acquireMachinesLock for "default-k8s-diff-port-878000"
	I1007 05:38:10.569365   14957 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:38:10.569406   14957 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:38:10.576635   14957 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:38:10.594203   14957 start.go:159] libmachine.API.Create for "default-k8s-diff-port-878000" (driver="qemu2")
	I1007 05:38:10.594231   14957 client.go:168] LocalClient.Create starting
	I1007 05:38:10.594305   14957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:38:10.594344   14957 main.go:141] libmachine: Decoding PEM data...
	I1007 05:38:10.594355   14957 main.go:141] libmachine: Parsing certificate...
	I1007 05:38:10.594404   14957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:38:10.594434   14957 main.go:141] libmachine: Decoding PEM data...
	I1007 05:38:10.594441   14957 main.go:141] libmachine: Parsing certificate...
	I1007 05:38:10.594829   14957 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:38:10.735937   14957 main.go:141] libmachine: Creating SSH key...
	I1007 05:38:10.851595   14957 main.go:141] libmachine: Creating Disk image...
	I1007 05:38:10.851601   14957 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:38:10.851782   14957 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/disk.qcow2
	I1007 05:38:10.861466   14957 main.go:141] libmachine: STDOUT: 
	I1007 05:38:10.861487   14957 main.go:141] libmachine: STDERR: 
	I1007 05:38:10.861541   14957 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/disk.qcow2 +20000M
	I1007 05:38:10.870010   14957 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:38:10.870025   14957 main.go:141] libmachine: STDERR: 
	I1007 05:38:10.870045   14957 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/disk.qcow2
	I1007 05:38:10.870051   14957 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:38:10.870068   14957 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:38:10.870098   14957 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:8b:86:73:ba:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/disk.qcow2
	I1007 05:38:10.871955   14957 main.go:141] libmachine: STDOUT: 
	I1007 05:38:10.871969   14957 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:38:10.871993   14957 client.go:171] duration metric: took 277.760833ms to LocalClient.Create
	I1007 05:38:12.874145   14957 start.go:128] duration metric: took 2.304758375s to createHost
	I1007 05:38:12.874271   14957 start.go:83] releasing machines lock for "default-k8s-diff-port-878000", held for 2.30494775s
	W1007 05:38:12.874327   14957 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:12.885152   14957 out.go:177] * Deleting "default-k8s-diff-port-878000" in qemu2 ...
	W1007 05:38:12.908846   14957 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:12.908893   14957 start.go:729] Will try again in 5 seconds ...
	I1007 05:38:17.910995   14957 start.go:360] acquireMachinesLock for default-k8s-diff-port-878000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:38:17.911420   14957 start.go:364] duration metric: took 333.542µs to acquireMachinesLock for "default-k8s-diff-port-878000"
	I1007 05:38:17.911560   14957 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:38:17.911824   14957 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:38:17.920516   14957 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:38:17.960144   14957 start.go:159] libmachine.API.Create for "default-k8s-diff-port-878000" (driver="qemu2")
	I1007 05:38:17.960196   14957 client.go:168] LocalClient.Create starting
	I1007 05:38:17.960327   14957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:38:17.960386   14957 main.go:141] libmachine: Decoding PEM data...
	I1007 05:38:17.960407   14957 main.go:141] libmachine: Parsing certificate...
	I1007 05:38:17.960463   14957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:38:17.960497   14957 main.go:141] libmachine: Decoding PEM data...
	I1007 05:38:17.960511   14957 main.go:141] libmachine: Parsing certificate...
	I1007 05:38:17.961108   14957 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:38:18.116961   14957 main.go:141] libmachine: Creating SSH key...
	I1007 05:38:18.313192   14957 main.go:141] libmachine: Creating Disk image...
	I1007 05:38:18.313200   14957 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:38:18.313407   14957 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/disk.qcow2
	I1007 05:38:18.323547   14957 main.go:141] libmachine: STDOUT: 
	I1007 05:38:18.323570   14957 main.go:141] libmachine: STDERR: 
	I1007 05:38:18.323632   14957 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/disk.qcow2 +20000M
	I1007 05:38:18.332009   14957 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:38:18.332024   14957 main.go:141] libmachine: STDERR: 
	I1007 05:38:18.332036   14957 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/disk.qcow2
	I1007 05:38:18.332042   14957 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:38:18.332050   14957 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:38:18.332088   14957 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:a9:87:49:9b:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/disk.qcow2
	I1007 05:38:18.333890   14957 main.go:141] libmachine: STDOUT: 
	I1007 05:38:18.333903   14957 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:38:18.333915   14957 client.go:171] duration metric: took 373.719292ms to LocalClient.Create
	I1007 05:38:20.336104   14957 start.go:128] duration metric: took 2.42428s to createHost
	I1007 05:38:20.336196   14957 start.go:83] releasing machines lock for "default-k8s-diff-port-878000", held for 2.424796292s
	W1007 05:38:20.336607   14957 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-878000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-878000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:20.349316   14957 out.go:201] 
	W1007 05:38:20.353431   14957 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:38:20.353467   14957 out.go:270] * 
	* 
	W1007 05:38:20.356254   14957 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:38:20.366327   14957 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-878000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000: exit status 7 (75.280666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.00s)
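Every failure in this group traces to the same line in the stderr above: socket_vmnet_client cannot reach the socket_vmnet daemon, so QEMU never launches and each profile is left "Stopped". A minimal probe, sketched below in Go, reproduces just that connection step; the socket path is taken from the logs, and the program is illustrative only, not part of the test suite.

	// socketprobe.go: illustrative only, not part of the minikube test suite.
	// Dials the unix socket that socket_vmnet_client connects to, reproducing
	// the `Failed to connect to "/var/run/socket_vmnet": Connection refused`
	// error seen in every failure above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the logs above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A missing or dead socket_vmnet daemon surfaces here as
			// "connection refused" or "no such file or directory".
			fmt.Fprintf(os.Stderr, "socket_vmnet probe failed: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

On a healthy agent the dial succeeds immediately; on this one it would be expected to fail with the same "connection refused", which points at the socket_vmnet service on the host rather than at minikube or any individual test.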

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-860000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-860000 create -f testdata/busybox.yaml: exit status 1 (29.57ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-860000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-860000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000: exit status 7 (33.228375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-860000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000: exit status 7 (33.168042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-860000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-860000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-860000 describe deploy/metrics-server -n kube-system: exit status 1 (27.335417ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-860000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-860000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000: exit status 7 (33.753875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-878000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-878000 create -f testdata/busybox.yaml: exit status 1 (31.825625ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-878000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-878000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000: exit status 7 (35.714416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-878000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000: exit status 7 (41.580083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-878000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-878000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-878000 describe deploy/metrics-server -n kube-system: exit status 1 (28.239166ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-878000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-878000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000: exit status 7 (37.634959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-860000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-860000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.197349042s)

                                                
                                                
-- stdout --
	* [embed-certs-860000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-860000" primary control-plane node in "embed-certs-860000" cluster
	* Restarting existing qemu2 VM for "embed-certs-860000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-860000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 05:38:20.618983   15020 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:38:20.619129   15020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:20.619133   15020 out.go:358] Setting ErrFile to fd 2...
	I1007 05:38:20.619135   15020 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:20.619260   15020 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:38:20.622234   15020 out.go:352] Setting JSON to false
	I1007 05:38:20.641913   15020 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7671,"bootTime":1728297029,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:38:20.642000   15020 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:38:20.645137   15020 out.go:177] * [embed-certs-860000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:38:20.652186   15020 notify.go:220] Checking for updates...
	I1007 05:38:20.656123   15020 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:38:20.660120   15020 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:38:20.663051   15020 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:38:20.666161   15020 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:38:20.669031   15020 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:38:20.672078   15020 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:38:20.675597   15020 config.go:182] Loaded profile config "embed-certs-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:38:20.675931   15020 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:38:20.679970   15020 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:38:20.689096   15020 start.go:297] selected driver: qemu2
	I1007 05:38:20.689104   15020 start.go:901] validating driver "qemu2" against &{Name:embed-certs-860000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:38:20.689169   15020 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:38:20.691718   15020 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:38:20.691749   15020 cni.go:84] Creating CNI manager for ""
	I1007 05:38:20.691770   15020 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:38:20.691799   15020 start.go:340] cluster config:
	{Name:embed-certs-860000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-860000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:38:20.696245   15020 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:38:20.701055   15020 out.go:177] * Starting "embed-certs-860000" primary control-plane node in "embed-certs-860000" cluster
	I1007 05:38:20.707996   15020 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:38:20.708025   15020 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:38:20.708032   15020 cache.go:56] Caching tarball of preloaded images
	I1007 05:38:20.708132   15020 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:38:20.708144   15020 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:38:20.708202   15020 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/embed-certs-860000/config.json ...
	I1007 05:38:20.708585   15020 start.go:360] acquireMachinesLock for embed-certs-860000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:38:20.708618   15020 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "embed-certs-860000"
	I1007 05:38:20.708631   15020 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:38:20.708635   15020 fix.go:54] fixHost starting: 
	I1007 05:38:20.708750   15020 fix.go:112] recreateIfNeeded on embed-certs-860000: state=Stopped err=<nil>
	W1007 05:38:20.708760   15020 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:38:20.713078   15020 out.go:177] * Restarting existing qemu2 VM for "embed-certs-860000" ...
	I1007 05:38:20.721088   15020 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:38:20.721137   15020 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:d7:49:22:68:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/disk.qcow2
	I1007 05:38:20.723106   15020 main.go:141] libmachine: STDOUT: 
	I1007 05:38:20.723129   15020 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:38:20.723161   15020 fix.go:56] duration metric: took 14.523417ms for fixHost
	I1007 05:38:20.723168   15020 start.go:83] releasing machines lock for "embed-certs-860000", held for 14.546ms
	W1007 05:38:20.723174   15020 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:38:20.723226   15020 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:20.723230   15020 start.go:729] Will try again in 5 seconds ...
	I1007 05:38:25.725325   15020 start.go:360] acquireMachinesLock for embed-certs-860000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:38:25.725694   15020 start.go:364] duration metric: took 282.709µs to acquireMachinesLock for "embed-certs-860000"
	I1007 05:38:25.725807   15020 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:38:25.725825   15020 fix.go:54] fixHost starting: 
	I1007 05:38:25.726529   15020 fix.go:112] recreateIfNeeded on embed-certs-860000: state=Stopped err=<nil>
	W1007 05:38:25.726554   15020 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:38:25.735997   15020 out.go:177] * Restarting existing qemu2 VM for "embed-certs-860000" ...
	I1007 05:38:25.738951   15020 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:38:25.739178   15020 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:d7:49:22:68:a9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/embed-certs-860000/disk.qcow2
	I1007 05:38:25.749522   15020 main.go:141] libmachine: STDOUT: 
	I1007 05:38:25.749576   15020 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:38:25.749685   15020 fix.go:56] duration metric: took 23.862459ms for fixHost
	I1007 05:38:25.749705   15020 start.go:83] releasing machines lock for "embed-certs-860000", held for 23.991792ms
	W1007 05:38:25.749874   15020 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-860000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-860000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:25.756718   15020 out.go:201] 
	W1007 05:38:25.760984   15020 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:38:25.761018   15020 out.go:270] * 
	* 
	W1007 05:38:25.763424   15020 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:38:25.770901   15020 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-860000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000: exit status 7 (69.774625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.27s)
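The SecondStart logs above also show minikube's start loop in miniature: one fixHost attempt, a fixed five-second wait ("Will try again in 5 seconds ..."), a second attempt, then exit with GUEST_PROVISION and status 80 (the value the test asserts on). A compressed sketch of that flow, with illustrative names (startHost stands in for the qemu2 driver start; none of this is minikube's actual API):

	// retryflow.go: editor's sketch of the two-attempt start loop visible in
	// the logs above. startHost and its error text are stand-ins for the real
	// qemu2 driver start; this is not minikube's actual code or API.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	func startHost() error {
		// In the real runs this is the driver start, which fails because the
		// socket_vmnet daemon is unreachable.
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err = startHost(); err != nil {
				fmt.Printf("X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: %v\n", err)
				os.Exit(80) // the exit status the tests assert on
			}
		}
	}

That fixed two-attempt, five-second-backoff shape is why every start failure in this report lands near the five-second mark (SecondStart) or ten-second mark (FirstStart, which pays the wait plus two createHost attempts).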

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-878000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-878000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.1968245s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-878000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-878000" primary control-plane node in "default-k8s-diff-port-878000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-878000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-878000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 05:38:23.928711   15053 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:38:23.928880   15053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:23.928884   15053 out.go:358] Setting ErrFile to fd 2...
	I1007 05:38:23.928886   15053 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:23.929002   15053 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:38:23.930105   15053 out.go:352] Setting JSON to false
	I1007 05:38:23.947707   15053 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7674,"bootTime":1728297029,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:38:23.947781   15053 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:38:23.953130   15053 out.go:177] * [default-k8s-diff-port-878000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:38:23.961035   15053 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:38:23.961145   15053 notify.go:220] Checking for updates...
	I1007 05:38:23.967934   15053 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:38:23.971060   15053 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:38:23.974105   15053 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:38:23.975530   15053 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:38:23.979025   15053 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:38:23.982422   15053 config.go:182] Loaded profile config "default-k8s-diff-port-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:38:23.982693   15053 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:38:23.986913   15053 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:38:23.994028   15053 start.go:297] selected driver: qemu2
	I1007 05:38:23.994034   15053 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:38:23.994092   15053 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:38:23.996605   15053 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 05:38:23.996632   15053 cni.go:84] Creating CNI manager for ""
	I1007 05:38:23.996652   15053 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:38:23.996686   15053 start.go:340] cluster config:
	{Name:default-k8s-diff-port-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:38:24.001138   15053 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:38:24.008998   15053 out.go:177] * Starting "default-k8s-diff-port-878000" primary control-plane node in "default-k8s-diff-port-878000" cluster
	I1007 05:38:24.012902   15053 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:38:24.012933   15053 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:38:24.012945   15053 cache.go:56] Caching tarball of preloaded images
	I1007 05:38:24.013008   15053 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:38:24.013014   15053 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:38:24.013092   15053 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/default-k8s-diff-port-878000/config.json ...
	I1007 05:38:24.013547   15053 start.go:360] acquireMachinesLock for default-k8s-diff-port-878000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:38:24.013576   15053 start.go:364] duration metric: took 23.792µs to acquireMachinesLock for "default-k8s-diff-port-878000"
	I1007 05:38:24.013585   15053 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:38:24.013590   15053 fix.go:54] fixHost starting: 
	I1007 05:38:24.013716   15053 fix.go:112] recreateIfNeeded on default-k8s-diff-port-878000: state=Stopped err=<nil>
	W1007 05:38:24.013726   15053 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:38:24.018040   15053 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-878000" ...
	I1007 05:38:24.025979   15053 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:38:24.026027   15053 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:a9:87:49:9b:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/disk.qcow2
	I1007 05:38:24.028175   15053 main.go:141] libmachine: STDOUT: 
	I1007 05:38:24.028195   15053 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:38:24.028228   15053 fix.go:56] duration metric: took 14.635875ms for fixHost
	I1007 05:38:24.028233   15053 start.go:83] releasing machines lock for "default-k8s-diff-port-878000", held for 14.652917ms
	W1007 05:38:24.028239   15053 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:38:24.028288   15053 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:24.028292   15053 start.go:729] Will try again in 5 seconds ...
	I1007 05:38:29.030436   15053 start.go:360] acquireMachinesLock for default-k8s-diff-port-878000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:38:29.030817   15053 start.go:364] duration metric: took 295.375µs to acquireMachinesLock for "default-k8s-diff-port-878000"
	I1007 05:38:29.030908   15053 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:38:29.030926   15053 fix.go:54] fixHost starting: 
	I1007 05:38:29.031499   15053 fix.go:112] recreateIfNeeded on default-k8s-diff-port-878000: state=Stopped err=<nil>
	W1007 05:38:29.031521   15053 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:38:29.040282   15053 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-878000" ...
	I1007 05:38:29.047305   15053 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:38:29.047494   15053 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:a9:87:49:9b:30 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/default-k8s-diff-port-878000/disk.qcow2
	I1007 05:38:29.057297   15053 main.go:141] libmachine: STDOUT: 
	I1007 05:38:29.057356   15053 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:38:29.057477   15053 fix.go:56] duration metric: took 26.553209ms for fixHost
	I1007 05:38:29.057499   15053 start.go:83] releasing machines lock for "default-k8s-diff-port-878000", held for 26.659792ms
	W1007 05:38:29.057680   15053 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-878000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-878000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:29.066155   15053 out.go:201] 
	W1007 05:38:29.069327   15053 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:38:29.069357   15053 out.go:270] * 
	* 
	W1007 05:38:29.071204   15053 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:38:29.080291   15053 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-878000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000: exit status 7 (70.351166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)
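Every qemu2 start in this run fails the same way: socket_vmnet_client cannot reach the daemon socket, so QEMU is never launched ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A hedged sketch for confirming the daemon is down on the agent follows; the paths are taken from the log above, while the launchctl query and manual start are assumptions based on socket_vmnet's documented install, not on anything in this report:

    ls -l /var/run/socket_vmnet                              # the socket file should exist
    sudo launchctl list | grep socket_vmnet                  # is a daemon loaded? (label assumed)
    # manual start; the gateway address is an example value:
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet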

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-860000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000: exit status 7 (35.0995ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)
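The context "embed-certs-860000" does not exist failures in this and the next few subtests are a cascade from this profile's earlier failed start: minikube only writes a kubeconfig context once the cluster actually comes up. A hedged way to confirm, using the KUBECONFIG path shown elsewhere in this log (standard kubectl; nothing profile-specific is assumed):

    KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig \
        kubectl config get-contexts -o name    # the embed-certs-860000 context should be absent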

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-860000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-860000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-860000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.347958ms)

** stderr ** 
	error: context "embed-certs-860000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-860000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000: exit status 7 (33.905375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-860000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000: exit status 7 (33.285708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
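The v1.31.1 images are reported missing only because the VM never started: "image list" has nothing to enumerate, so the entire want-set shows up in the -want/+got diff. A hedged sketch of re-running the comparison by hand; the repoTags field name is an assumption about minikube's JSON output, and jq is assumed to be installed on the agent:

    out/minikube-darwin-arm64 -p embed-certs-860000 image list --format=json \
        | jq -r '.[].repoTags[]' | sort    # compare against the -want list above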

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-860000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-860000 --alsologtostderr -v=1: exit status 83 (45.08975ms)

-- stdout --
	* The control-plane node embed-certs-860000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-860000"

-- /stdout --
** stderr ** 
	I1007 05:38:26.061767   15072 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:38:26.061952   15072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:26.061955   15072 out.go:358] Setting ErrFile to fd 2...
	I1007 05:38:26.061958   15072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:26.062105   15072 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:38:26.062345   15072 out.go:352] Setting JSON to false
	I1007 05:38:26.062352   15072 mustload.go:65] Loading cluster: embed-certs-860000
	I1007 05:38:26.062576   15072 config.go:182] Loaded profile config "embed-certs-860000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:38:26.067017   15072 out.go:177] * The control-plane node embed-certs-860000 host is not running: state=Stopped
	I1007 05:38:26.069961   15072 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-860000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-860000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000: exit status 7 (32.836125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-860000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000: exit status 7 (33.077875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-860000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
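Pause fails with exit status 83 because the profile config exists but the host is Stopped, as the stdout above shows; minikube declines to pause a machine that is not running. A hedged sketch of guarding a pause in a wrapper script, reusing the status invocation seen throughout this report:

    host=$(out/minikube-darwin-arm64 status --format='{{.Host}}' -p embed-certs-860000)
    if [ "$host" = "Running" ]; then
        out/minikube-darwin-arm64 pause -p embed-certs-860000
    fi    # starting the host first would still require socket_vmnet to be up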

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-260000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-260000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (10.047330583s)

-- stdout --
	* [newest-cni-260000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-260000" primary control-plane node in "newest-cni-260000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-260000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:38:26.392751   15089 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:38:26.392924   15089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:26.392927   15089 out.go:358] Setting ErrFile to fd 2...
	I1007 05:38:26.392930   15089 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:26.393063   15089 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:38:26.394203   15089 out.go:352] Setting JSON to false
	I1007 05:38:26.411856   15089 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7677,"bootTime":1728297029,"procs":527,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:38:26.411924   15089 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:38:26.417132   15089 out.go:177] * [newest-cni-260000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:38:26.424203   15089 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:38:26.424284   15089 notify.go:220] Checking for updates...
	I1007 05:38:26.430182   15089 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:38:26.433231   15089 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:38:26.436237   15089 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:38:26.439215   15089 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:38:26.442218   15089 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:38:26.445482   15089 config.go:182] Loaded profile config "default-k8s-diff-port-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:38:26.445541   15089 config.go:182] Loaded profile config "multinode-062000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:38:26.445587   15089 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:38:26.450217   15089 out.go:177] * Using the qemu2 driver based on user configuration
	I1007 05:38:26.457058   15089 start.go:297] selected driver: qemu2
	I1007 05:38:26.457064   15089 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:38:26.457069   15089 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:38:26.459493   15089 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1007 05:38:26.459530   15089 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1007 05:38:26.464144   15089 out.go:177] * Automatically selected the socket_vmnet network
	I1007 05:38:26.471233   15089 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1007 05:38:26.471265   15089 cni.go:84] Creating CNI manager for ""
	I1007 05:38:26.471289   15089 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:38:26.471295   15089 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:38:26.471335   15089 start.go:340] cluster config:
	{Name:newest-cni-260000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetr
ics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:38:26.476205   15089 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:38:26.483076   15089 out.go:177] * Starting "newest-cni-260000" primary control-plane node in "newest-cni-260000" cluster
	I1007 05:38:26.487137   15089 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:38:26.487162   15089 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:38:26.487170   15089 cache.go:56] Caching tarball of preloaded images
	I1007 05:38:26.487261   15089 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:38:26.487268   15089 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:38:26.487328   15089 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/newest-cni-260000/config.json ...
	I1007 05:38:26.487340   15089 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/newest-cni-260000/config.json: {Name:mk8cd66586f22193c043ce126cd8febe35f442a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:38:26.487729   15089 start.go:360] acquireMachinesLock for newest-cni-260000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:38:26.487780   15089 start.go:364] duration metric: took 45.25µs to acquireMachinesLock for "newest-cni-260000"
	I1007 05:38:26.487794   15089 start.go:93] Provisioning new machine with config: &{Name:newest-cni-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.31.1 ClusterName:newest-cni-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:38:26.487842   15089 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:38:26.495174   15089 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:38:26.513080   15089 start.go:159] libmachine.API.Create for "newest-cni-260000" (driver="qemu2")
	I1007 05:38:26.513105   15089 client.go:168] LocalClient.Create starting
	I1007 05:38:26.513199   15089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:38:26.513239   15089 main.go:141] libmachine: Decoding PEM data...
	I1007 05:38:26.513253   15089 main.go:141] libmachine: Parsing certificate...
	I1007 05:38:26.513304   15089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:38:26.513338   15089 main.go:141] libmachine: Decoding PEM data...
	I1007 05:38:26.513362   15089 main.go:141] libmachine: Parsing certificate...
	I1007 05:38:26.513797   15089 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:38:26.655309   15089 main.go:141] libmachine: Creating SSH key...
	I1007 05:38:26.947167   15089 main.go:141] libmachine: Creating Disk image...
	I1007 05:38:26.947178   15089 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:38:26.947419   15089 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/disk.qcow2
	I1007 05:38:26.957958   15089 main.go:141] libmachine: STDOUT: 
	I1007 05:38:26.957976   15089 main.go:141] libmachine: STDERR: 
	I1007 05:38:26.958046   15089 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/disk.qcow2 +20000M
	I1007 05:38:26.966586   15089 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:38:26.966599   15089 main.go:141] libmachine: STDERR: 
	I1007 05:38:26.966617   15089 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/disk.qcow2
	I1007 05:38:26.966630   15089 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:38:26.966641   15089 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:38:26.966675   15089 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:d8:60:a3:32:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/disk.qcow2
	I1007 05:38:26.968491   15089 main.go:141] libmachine: STDOUT: 
	I1007 05:38:26.968506   15089 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:38:26.968528   15089 client.go:171] duration metric: took 455.413417ms to LocalClient.Create
	I1007 05:38:28.970711   15089 start.go:128] duration metric: took 2.482863459s to createHost
	I1007 05:38:28.970782   15089 start.go:83] releasing machines lock for "newest-cni-260000", held for 2.483038666s
	W1007 05:38:28.970859   15089 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:28.981287   15089 out.go:177] * Deleting "newest-cni-260000" in qemu2 ...
	W1007 05:38:29.005464   15089 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:29.005489   15089 start.go:729] Will try again in 5 seconds ...
	I1007 05:38:34.007622   15089 start.go:360] acquireMachinesLock for newest-cni-260000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:38:34.008239   15089 start.go:364] duration metric: took 515.083µs to acquireMachinesLock for "newest-cni-260000"
	I1007 05:38:34.008401   15089 start.go:93] Provisioning new machine with config: &{Name:newest-cni-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.31.1 ClusterName:newest-cni-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube
-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1007 05:38:34.008726   15089 start.go:125] createHost starting for "" (driver="qemu2")
	I1007 05:38:34.014303   15089 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1007 05:38:34.065563   15089 start.go:159] libmachine.API.Create for "newest-cni-260000" (driver="qemu2")
	I1007 05:38:34.065606   15089 client.go:168] LocalClient.Create starting
	I1007 05:38:34.065744   15089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/ca.pem
	I1007 05:38:34.065832   15089 main.go:141] libmachine: Decoding PEM data...
	I1007 05:38:34.065848   15089 main.go:141] libmachine: Parsing certificate...
	I1007 05:38:34.065904   15089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18424-10771/.minikube/certs/cert.pem
	I1007 05:38:34.065967   15089 main.go:141] libmachine: Decoding PEM data...
	I1007 05:38:34.065986   15089 main.go:141] libmachine: Parsing certificate...
	I1007 05:38:34.066650   15089 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso...
	I1007 05:38:34.220534   15089 main.go:141] libmachine: Creating SSH key...
	I1007 05:38:34.343105   15089 main.go:141] libmachine: Creating Disk image...
	I1007 05:38:34.343115   15089 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1007 05:38:34.343308   15089 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/disk.qcow2.raw /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/disk.qcow2
	I1007 05:38:34.353123   15089 main.go:141] libmachine: STDOUT: 
	I1007 05:38:34.353149   15089 main.go:141] libmachine: STDERR: 
	I1007 05:38:34.353226   15089 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/disk.qcow2 +20000M
	I1007 05:38:34.361702   15089 main.go:141] libmachine: STDOUT: Image resized.
	
	I1007 05:38:34.361722   15089 main.go:141] libmachine: STDERR: 
	I1007 05:38:34.361739   15089 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/disk.qcow2
	I1007 05:38:34.361749   15089 main.go:141] libmachine: Starting QEMU VM...
	I1007 05:38:34.361758   15089 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:38:34.361795   15089 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:e7:f1:1c:55:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/disk.qcow2
	I1007 05:38:34.363627   15089 main.go:141] libmachine: STDOUT: 
	I1007 05:38:34.363645   15089 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:38:34.363659   15089 client.go:171] duration metric: took 298.051666ms to LocalClient.Create
	I1007 05:38:36.365773   15089 start.go:128] duration metric: took 2.357067917s to createHost
	I1007 05:38:36.365824   15089 start.go:83] releasing machines lock for "newest-cni-260000", held for 2.357585583s
	W1007 05:38:36.366169   15089 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-260000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:36.377784   15089 out.go:201] 
	W1007 05:38:36.381855   15089 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:38:36.381929   15089 out.go:270] * 
	W1007 05:38:36.384397   15089 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:38:36.398805   15089 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-260000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000: exit status 7 (70.7225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.12s)
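The log above shows the driver's full retry flow: create the host, fail on the socket, delete the profile, wait 5 seconds, recreate, fail again. Both qemu-img convert and resize succeed, which isolates the failure to socket_vmnet_client. A hedged minimal reproduction (socket_vmnet_client connects to the named socket and execs the given command with the connection passed as fd 3; `true` stands in here for the qemu-system-aarch64 invocation):

    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
    # while the daemon is down, this should print:
    #   Failed to connect to "/var/run/socket_vmnet": Connection refused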

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-878000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000: exit status 7 (34.47975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-878000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-878000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-878000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.197375ms)

** stderr ** 
	error: context "default-k8s-diff-port-878000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-878000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000: exit status 7 (33.172041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-878000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000: exit status 7 (32.864375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-878000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-878000 --alsologtostderr -v=1: exit status 83 (43.799125ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-878000"

-- /stdout --
** stderr ** 
	I1007 05:38:29.368432   15111 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:38:29.368622   15111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:29.368625   15111 out.go:358] Setting ErrFile to fd 2...
	I1007 05:38:29.368627   15111 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:29.368755   15111 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:38:29.369000   15111 out.go:352] Setting JSON to false
	I1007 05:38:29.369009   15111 mustload.go:65] Loading cluster: default-k8s-diff-port-878000
	I1007 05:38:29.369219   15111 config.go:182] Loaded profile config "default-k8s-diff-port-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:38:29.373632   15111 out.go:177] * The control-plane node default-k8s-diff-port-878000 host is not running: state=Stopped
	I1007 05:38:29.376416   15111 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-878000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-878000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000: exit status 7 (33.396709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-878000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000: exit status 7 (32.908959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-260000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-260000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.200220959s)

-- stdout --
	* [newest-cni-260000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-260000" primary control-plane node in "newest-cni-260000" cluster
	* Restarting existing qemu2 VM for "newest-cni-260000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-260000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1007 05:38:40.048262   15158 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:38:40.048416   15158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:40.048419   15158 out.go:358] Setting ErrFile to fd 2...
	I1007 05:38:40.048422   15158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:40.048556   15158 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:38:40.049630   15158 out.go:352] Setting JSON to false
	I1007 05:38:40.067472   15158 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":7691,"bootTime":1728297029,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:38:40.067541   15158 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:38:40.073116   15158 out.go:177] * [newest-cni-260000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:38:40.080008   15158 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:38:40.080043   15158 notify.go:220] Checking for updates...
	I1007 05:38:40.086986   15158 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:38:40.089993   15158 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:38:40.093019   15158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:38:40.096065   15158 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:38:40.098971   15158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:38:40.102381   15158 config.go:182] Loaded profile config "newest-cni-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:38:40.102655   15158 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:38:40.106928   15158 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:38:40.113988   15158 start.go:297] selected driver: qemu2
	I1007 05:38:40.113995   15158 start.go:901] validating driver "qemu2" against &{Name:newest-cni-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.31.1 ClusterName:newest-cni-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Li
stenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:38:40.114052   15158 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:38:40.116523   15158 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1007 05:38:40.116546   15158 cni.go:84] Creating CNI manager for ""
	I1007 05:38:40.116573   15158 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:38:40.116606   15158 start.go:340] cluster config:
	{Name:newest-cni-260000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-260000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:38:40.121197   15158 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:38:40.128985   15158 out.go:177] * Starting "newest-cni-260000" primary control-plane node in "newest-cni-260000" cluster
	I1007 05:38:40.133036   15158 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:38:40.133057   15158 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:38:40.133067   15158 cache.go:56] Caching tarball of preloaded images
	I1007 05:38:40.133151   15158 preload.go:172] Found /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 05:38:40.133157   15158 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1007 05:38:40.133238   15158 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/newest-cni-260000/config.json ...
	I1007 05:38:40.133831   15158 start.go:360] acquireMachinesLock for newest-cni-260000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:38:40.133866   15158 start.go:364] duration metric: took 27.792µs to acquireMachinesLock for "newest-cni-260000"
	I1007 05:38:40.133877   15158 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:38:40.133883   15158 fix.go:54] fixHost starting: 
	I1007 05:38:40.134017   15158 fix.go:112] recreateIfNeeded on newest-cni-260000: state=Stopped err=<nil>
	W1007 05:38:40.134028   15158 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:38:40.138016   15158 out.go:177] * Restarting existing qemu2 VM for "newest-cni-260000" ...
	I1007 05:38:40.145909   15158 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:38:40.145951   15158 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:e7:f1:1c:55:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/disk.qcow2
	I1007 05:38:40.148418   15158 main.go:141] libmachine: STDOUT: 
	I1007 05:38:40.148439   15158 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:38:40.148471   15158 fix.go:56] duration metric: took 14.586917ms for fixHost
	I1007 05:38:40.148477   15158 start.go:83] releasing machines lock for "newest-cni-260000", held for 14.605916ms
	W1007 05:38:40.148489   15158 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:38:40.148531   15158 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:40.148537   15158 start.go:729] Will try again in 5 seconds ...
	I1007 05:38:45.150705   15158 start.go:360] acquireMachinesLock for newest-cni-260000: {Name:mk745ffd48e134b9f7ed6a74a531d8d6760d77af Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1007 05:38:45.151329   15158 start.go:364] duration metric: took 519.833µs to acquireMachinesLock for "newest-cni-260000"
	I1007 05:38:45.151471   15158 start.go:96] Skipping create...Using existing machine configuration
	I1007 05:38:45.151501   15158 fix.go:54] fixHost starting: 
	I1007 05:38:45.152276   15158 fix.go:112] recreateIfNeeded on newest-cni-260000: state=Stopped err=<nil>
	W1007 05:38:45.152304   15158 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 05:38:45.161929   15158 out.go:177] * Restarting existing qemu2 VM for "newest-cni-260000" ...
	I1007 05:38:45.167962   15158 qemu.go:418] Using hvf for hardware acceleration
	I1007 05:38:45.168180   15158 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:e7:f1:1c:55:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18424-10771/.minikube/machines/newest-cni-260000/disk.qcow2
	I1007 05:38:45.179218   15158 main.go:141] libmachine: STDOUT: 
	I1007 05:38:45.179266   15158 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1007 05:38:45.179353   15158 fix.go:56] duration metric: took 27.853458ms for fixHost
	I1007 05:38:45.179374   15158 start.go:83] releasing machines lock for "newest-cni-260000", held for 28.022042ms
	W1007 05:38:45.179581   15158 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-260000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-260000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1007 05:38:45.186953   15158 out.go:201] 
	W1007 05:38:45.190959   15158 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1007 05:38:45.190995   15158 out.go:270] * 
	* 
	W1007 05:38:45.193594   15158 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:38:45.201875   15158 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-260000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000: exit status 7 (74.838709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.28s)
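
Both restart attempts above fail identically: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM never comes up. A minimal Go sketch, not part of the test suite, that probes the same unix socket before rerunning; the socket path is copied from the log above:

	// probe_socket_vmnet.go: check whether the socket_vmnet daemon is
	// accepting connections on the path the qemu2 driver uses above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path reported in the failure
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("socket_vmnet not reachable at %s: %v\n", sock, err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}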

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-260000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000: exit status 7 (35.126459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)
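
The diff above is a want/got comparison: every expected v1.31.1 image is reported missing because the host never started, so "image list" returned nothing. A small Go sketch of that comparison shape, using illustrative names rather than the test's real helpers:

	// missingImages reports which expected images are absent from the
	// actual list, mirroring the -want +got diff printed above.
	package main

	import "fmt"

	func missingImages(want, got []string) []string {
		have := make(map[string]bool, len(got))
		for _, g := range got {
			have[g] = true
		}
		var missing []string
		for _, w := range want {
			if !have[w] {
				missing = append(missing, w)
			}
		}
		return missing
	}

	func main() {
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.1",
			"registry.k8s.io/pause:3.10",
		}
		got := []string{} // empty: the stopped host reports no images
		fmt.Println(missingImages(want, got))
	}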

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-260000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-260000 --alsologtostderr -v=1: exit status 83 (47.429667ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-260000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-260000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 05:38:45.406090   15174 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:38:45.406287   15174 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:45.406290   15174 out.go:358] Setting ErrFile to fd 2...
	I1007 05:38:45.406293   15174 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:38:45.406430   15174 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:38:45.406669   15174 out.go:352] Setting JSON to false
	I1007 05:38:45.406677   15174 mustload.go:65] Loading cluster: newest-cni-260000
	I1007 05:38:45.406898   15174 config.go:182] Loaded profile config "newest-cni-260000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:38:45.411578   15174 out.go:177] * The control-plane node newest-cni-260000 host is not running: state=Stopped
	I1007 05:38:45.415477   15174 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-260000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-260000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000: exit status 7 (35.430708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-260000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000: exit status 7 (34.821875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-260000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.12s)
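
Exit status 83 here accompanies the "host is not running: state=Stopped" message, while the status checks return exit status 7; treating those codes as meaning that is an inference from this log, not a documented contract. A hedged Go sketch of invoking the binary and inspecting the code, with binary path and profile name copied from the failing invocation above:

	// Run a minikube subcommand and report its exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "newest-cni-260000")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("exit code:", ee.ExitCode())
		}
	}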

                                                
                                    

Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 18.55
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.12
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.29
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
35 TestHyperKitDriverInstallOrUpdate 10.93
39 TestErrorSpam/start 0.4
40 TestErrorSpam/status 0.1
41 TestErrorSpam/pause 0.14
42 TestErrorSpam/unpause 0.13
43 TestErrorSpam/stop 7.28
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.93
55 TestFunctional/serial/CacheCmd/cache/add_local 1.04
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.08
69 TestFunctional/parallel/ConfigCmd 0.24
71 TestFunctional/parallel/DryRun 0.29
72 TestFunctional/parallel/InternationalLanguage 0.12
78 TestFunctional/parallel/AddonsCmd 0.1
93 TestFunctional/parallel/License 1.34
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.7
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.1
126 TestFunctional/parallel/ProfileCmd/profile_list 0.09
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.09
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.05
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.17
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.36
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.21
193 TestMainNoArgs 0.04
240 TestStoppedBinaryUpgrade/Setup 4.81
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
257 TestNoKubernetes/serial/ProfileList 31.51
258 TestNoKubernetes/serial/Stop 3.91
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
268 TestStoppedBinaryUpgrade/MinikubeLogs 0.72
277 TestStartStop/group/old-k8s-version/serial/Stop 3.59
278 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
282 TestStartStop/group/no-preload/serial/Stop 3.15
283 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
299 TestStartStop/group/embed-certs/serial/Stop 3.54
301 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.11
304 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.08
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.07
319 TestStartStop/group/newest-cni/serial/Stop 3.33
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1007 05:12:42.455139   11284 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1007 05:12:42.455936   11284 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
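
This check is purely local: it passes when the versioned preload tarball is already on disk. A minimal sketch of the same existence test; the path layout is copied from the log, and using MINIKUBE_HOME as the cache root is an assumption:

	// Check for a local preload tarball at the path the log above reports.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		home := os.Getenv("MINIKUBE_HOME") // e.g. .../18424-10771/.minikube
		tarball := filepath.Join(home, "cache", "preloaded-tarball",
			"preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4")
		if _, err := os.Stat(tarball); err != nil {
			fmt.Println("preload missing:", err)
			return
		}
		fmt.Println("found local preload:", tarball)
	}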

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-839000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-839000: exit status 85 (100.276334ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-839000 | jenkins | v1.34.0 | 07 Oct 24 05:12 PDT |          |
	|         | -p download-only-839000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 05:12:02
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 05:12:02.842563   11285 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:12:02.842743   11285 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:12:02.842748   11285 out.go:358] Setting ErrFile to fd 2...
	I1007 05:12:02.842750   11285 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:12:02.842928   11285 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	W1007 05:12:02.843013   11285 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18424-10771/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18424-10771/.minikube/config/config.json: no such file or directory
	I1007 05:12:02.844863   11285 out.go:352] Setting JSON to true
	I1007 05:12:02.865401   11285 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6093,"bootTime":1728297029,"procs":526,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:12:02.865483   11285 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:12:02.870953   11285 out.go:97] [download-only-839000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	W1007 05:12:02.871202   11285 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball: no such file or directory
	I1007 05:12:02.871166   11285 notify.go:220] Checking for updates...
	I1007 05:12:02.874928   11285 out.go:169] MINIKUBE_LOCATION=18424
	I1007 05:12:02.884979   11285 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:12:02.891955   11285 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:12:02.902912   11285 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:12:02.911751   11285 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	W1007 05:12:02.919954   11285 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 05:12:02.920261   11285 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:12:02.923919   11285 out.go:97] Using the qemu2 driver based on user configuration
	I1007 05:12:02.923942   11285 start.go:297] selected driver: qemu2
	I1007 05:12:02.923960   11285 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:12:02.924049   11285 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:12:02.926895   11285 out.go:169] Automatically selected the socket_vmnet network
	I1007 05:12:02.933942   11285 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1007 05:12:02.934046   11285 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 05:12:02.934090   11285 cni.go:84] Creating CNI manager for ""
	I1007 05:12:02.934129   11285 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1007 05:12:02.934198   11285 start.go:340] cluster config:
	{Name:download-only-839000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-839000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:12:02.939417   11285 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:12:02.943957   11285 out.go:97] Downloading VM boot image ...
	I1007 05:12:02.943978   11285 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/iso/arm64/minikube-v1.34.0-1727108440-19696-arm64.iso
	I1007 05:12:22.092074   11285 out.go:97] Starting "download-only-839000" primary control-plane node in "download-only-839000" cluster
	I1007 05:12:22.092110   11285 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 05:12:22.370915   11285 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1007 05:12:22.370975   11285 cache.go:56] Caching tarball of preloaded images
	I1007 05:12:22.371808   11285 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 05:12:22.375889   11285 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1007 05:12:22.375912   11285 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1007 05:12:22.934822   11285 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1007 05:12:41.131754   11285 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1007 05:12:41.131929   11285 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1007 05:12:41.826141   11285 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1007 05:12:41.826343   11285 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/download-only-839000/config.json ...
	I1007 05:12:41.826362   11285 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18424-10771/.minikube/profiles/download-only-839000/config.json: {Name:mk943d696e5b531ba5c348b81f378c7e975b4cce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 05:12:41.826628   11285 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1007 05:12:41.826863   11285 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1007 05:12:42.405224   11285 out.go:193] 
	W1007 05:12:42.408338   11285 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18424-10771/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106234f60 0x106234f60 0x106234f60 0x106234f60 0x106234f60 0x106234f60 0x106234f60] Decompressors:map[bz2:0x14000800c40 gz:0x14000800c48 tar:0x14000800bc0 tar.bz2:0x14000800be0 tar.gz:0x14000800bf0 tar.xz:0x14000800c00 tar.zst:0x14000800c20 tbz2:0x14000800be0 tgz:0x14000800bf0 txz:0x14000800c00 tzst:0x14000800c20 xz:0x14000800c50 zip:0x14000800c60 zst:0x14000800c58] Getters:map[file:0x140014a0560 http:0x1400071c140 https:0x1400071c190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1007 05:12:42.408362   11285 out_reason.go:110] 
	W1007 05:12:42.415234   11285 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 05:12:42.419256   11285 out.go:193] 
	
	
	* The control-plane node download-only-839000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-839000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
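
The underlying failure recorded in these logs is the kubectl cache step: dl.k8s.io returns 404 for the v1.20.0 darwin/arm64 checksum file, which suggests (an inference from the 404, not a documented fact) that no darwin/arm64 kubectl was published for that release. A small sketch that reproduces the check, with the URL copied from the log:

	// HEAD-check the checksum URL that fails above; expect 404.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println(url, "->", resp.Status)
	}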

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-839000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (18.55s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-318000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-318000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (18.550574208s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (18.55s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1007 05:13:01.381205   11284 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1007 05:13:01.381269   11284 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-318000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-318000: exit status 85 (84.105084ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-839000 | jenkins | v1.34.0 | 07 Oct 24 05:12 PDT |                     |
	|         | -p download-only-839000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 07 Oct 24 05:12 PDT | 07 Oct 24 05:12 PDT |
	| delete  | -p download-only-839000        | download-only-839000 | jenkins | v1.34.0 | 07 Oct 24 05:12 PDT | 07 Oct 24 05:12 PDT |
	| start   | -o=json --download-only        | download-only-318000 | jenkins | v1.34.0 | 07 Oct 24 05:12 PDT |                     |
	|         | -p download-only-318000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 05:12:42
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.0 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 05:12:42.863197   11313 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:12:42.863341   11313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:12:42.863344   11313 out.go:358] Setting ErrFile to fd 2...
	I1007 05:12:42.863346   11313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:12:42.863478   11313 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:12:42.864601   11313 out.go:352] Setting JSON to true
	I1007 05:12:42.882276   11313 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6133,"bootTime":1728297029,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:12:42.882356   11313 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:12:42.887370   11313 out.go:97] [download-only-318000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:12:42.887481   11313 notify.go:220] Checking for updates...
	I1007 05:12:42.891271   11313 out.go:169] MINIKUBE_LOCATION=18424
	I1007 05:12:42.894314   11313 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:12:42.898313   11313 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:12:42.901260   11313 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:12:42.904296   11313 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	W1007 05:12:42.910266   11313 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 05:12:42.910477   11313 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:12:42.913259   11313 out.go:97] Using the qemu2 driver based on user configuration
	I1007 05:12:42.913275   11313 start.go:297] selected driver: qemu2
	I1007 05:12:42.913280   11313 start.go:901] validating driver "qemu2" against <nil>
	I1007 05:12:42.913332   11313 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 05:12:42.916303   11313 out.go:169] Automatically selected the socket_vmnet network
	I1007 05:12:42.921640   11313 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1007 05:12:42.921727   11313 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 05:12:42.921750   11313 cni.go:84] Creating CNI manager for ""
	I1007 05:12:42.921778   11313 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1007 05:12:42.921789   11313 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1007 05:12:42.921829   11313 start.go:340] cluster config:
	{Name:download-only-318000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-318000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:12:42.926171   11313 iso.go:125] acquiring lock: {Name:mk88ad05eb5c119bfbbf04854eff8ec5427df733 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 05:12:42.929323   11313 out.go:97] Starting "download-only-318000" primary control-plane node in "download-only-318000" cluster
	I1007 05:12:42.929332   11313 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:12:43.534692   11313 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1007 05:12:43.534766   11313 cache.go:56] Caching tarball of preloaded images
	I1007 05:12:43.535728   11313 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1007 05:12:43.541336   11313 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1007 05:12:43.541367   11313 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I1007 05:12:44.112229   11313 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/18424-10771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-318000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-318000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-318000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestBinaryMirror (0.29s)

                                                
                                                
=== RUN   TestBinaryMirror
I1007 05:13:01.913166   11284 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-533000 --alsologtostderr --binary-mirror http://127.0.0.1:52031 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-533000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-533000
--- PASS: TestBinaryMirror (0.29s)
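
The test points --binary-mirror at a local HTTP endpoint (127.0.0.1:52031) so kubectl is fetched from it instead of dl.k8s.io. A minimal stand-in for such a mirror is a plain file server; the ./mirror directory layout is an assumption, not the test's actual fixture:

	// Serve a local directory over HTTP as a stand-in binary mirror.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		log.Fatal(http.ListenAndServe("127.0.0.1:52031",
			http.FileServer(http.Dir("./mirror"))))
	}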

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:934: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-708000
addons_test.go:934: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-708000: exit status 85 (63.326209ms)

                                                
                                                
-- stdout --
	* Profile "addons-708000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-708000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-708000
addons_test.go:945: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-708000: exit status 85 (67.1025ms)

                                                
                                                
-- stdout --
	* Profile "addons-708000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-708000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (10.93s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
I1007 05:23:44.618301   11284 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1007 05:23:44.618436   11284 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1007 05:23:46.605895   11284 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1007 05:23:46.606101   11284 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1007 05:23:46.606147   11284 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/001/docker-machine-driver-hyperkit
I1007 05:23:47.149826   11284 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1071fe380 0x1071fe380 0x1071fe380 0x1071fe380 0x1071fe380 0x1071fe380 0x1071fe380] Decompressors:map[bz2:0x1400051b620 gz:0x1400051b628 tar:0x1400051b5b0 tar.bz2:0x1400051b5c0 tar.gz:0x1400051b5e0 tar.xz:0x1400051b5f0 tar.zst:0x1400051b610 tbz2:0x1400051b5c0 tgz:0x1400051b5e0 txz:0x1400051b5f0 tzst:0x1400051b610 xz:0x1400051b630 zip:0x1400051b670 zst:0x1400051b638] Getters:map[file:0x14000793c80 http:0x14000c957c0 https:0x14000c95810] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1007 05:23:47.149967   11284 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate49856992/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (10.93s)
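
The log above shows the driver installer's fallback order: try the arch-suffixed release asset first, and when its checksum fetch 404s, retry the common (unsuffixed) name. A sketch of that pattern with a stubbed download function; the stub and names are illustrative, not minikube's real code:

	// Try the arch-specific asset, then fall back to the common one.
	package main

	import (
		"errors"
		"fmt"
	)

	// download stands in for the real fetcher; it always fails here so
	// the fallback path is exercised.
	func download(url string) error { return errors.New("bad response code: 404") }

	func main() {
		const base = "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"
		if err := download(base + "-arm64"); err != nil {
			fmt.Println("arch-specific download failed:", err, "- trying common version")
			if err := download(base); err != nil {
				fmt.Println("common download failed too:", err)
			}
		}
	}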

                                                
                                    
TestErrorSpam/start (0.4s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

                                                
                                    
TestErrorSpam/status (0.1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 status: exit status 7 (35.560625ms)

                                                
                                                
-- stdout --
	nospam-561000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 status: exit status 7 (34.873708ms)
-- stdout --
	nospam-561000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 status: exit status 7 (33.69025ms)
-- stdout --
	nospam-561000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)

TestErrorSpam/pause (0.14s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 pause: exit status 83 (44.900542ms)
-- stdout --
	* The control-plane node nospam-561000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-561000"
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 pause: exit status 83 (44.998333ms)
-- stdout --
	* The control-plane node nospam-561000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-561000"
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 pause: exit status 83 (44.647083ms)
-- stdout --
	* The control-plane node nospam-561000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-561000"
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.14s)

TestErrorSpam/unpause (0.13s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 unpause: exit status 83 (43.831125ms)
-- stdout --
	* The control-plane node nospam-561000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-561000"
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 unpause: exit status 83 (41.925916ms)
-- stdout --
	* The control-plane node nospam-561000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-561000"
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 unpause: exit status 83 (43.789375ms)
-- stdout --
	* The control-plane node nospam-561000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-561000"
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

TestErrorSpam/stop (7.28s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 stop: (1.78664575s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 stop: (3.56602725s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-561000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-561000 stop: (1.920604541s)
--- PASS: TestErrorSpam/stop (7.28s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/18424-10771/.minikube/files/etc/test/nested/copy/11284/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.93s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.93s)

TestFunctional/serial/CacheCmd/cache/add_local (1.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-359000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1407767233/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 cache add minikube-local-cache-test:functional-359000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 cache delete minikube-local-cache-test:functional-359000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-359000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/parallel/ConfigCmd (0.24s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 config get cpus: exit status 14 (35.252583ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 config get cpus: exit status 14 (36.414083ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)

TestFunctional/parallel/DryRun (0.29s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-359000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-359000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (169.880791ms)
-- stdout --
	* [functional-359000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1007 05:14:38.110476   11872 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:14:38.110641   11872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:14:38.110645   11872 out.go:358] Setting ErrFile to fd 2...
	I1007 05:14:38.110648   11872 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:14:38.110825   11872 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:14:38.112142   11872 out.go:352] Setting JSON to false
	I1007 05:14:38.132249   11872 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6249,"bootTime":1728297029,"procs":538,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:14:38.132315   11872 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:14:38.137197   11872 out.go:177] * [functional-359000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1007 05:14:38.144034   11872 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:14:38.144082   11872 notify.go:220] Checking for updates...
	I1007 05:14:38.150976   11872 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:14:38.154000   11872 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:14:38.157049   11872 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:14:38.160004   11872 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:14:38.163048   11872 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:14:38.166406   11872 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:14:38.166708   11872 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:14:38.169992   11872 out.go:177] * Using the qemu2 driver based on existing profile
	I1007 05:14:38.177054   11872 start.go:297] selected driver: qemu2
	I1007 05:14:38.177060   11872 start.go:901] validating driver "qemu2" against &{Name:functional-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:14:38.177134   11872 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:14:38.183990   11872 out.go:201] 
	W1007 05:14:38.188020   11872 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1007 05:14:38.192075   11872 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-359000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.29s)

TestFunctional/parallel/InternationalLanguage (0.12s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-359000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-359000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (120.981333ms)
-- stdout --
	* [functional-359000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1007 05:14:38.348292   11883 out.go:345] Setting OutFile to fd 1 ...
	I1007 05:14:38.348451   11883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:14:38.348453   11883 out.go:358] Setting ErrFile to fd 2...
	I1007 05:14:38.348456   11883 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 05:14:38.348597   11883 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18424-10771/.minikube/bin
	I1007 05:14:38.350016   11883 out.go:352] Setting JSON to false
	I1007 05:14:38.368395   11883 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6249,"bootTime":1728297029,"procs":538,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1007 05:14:38.368482   11883 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1007 05:14:38.372837   11883 out.go:177] * [functional-359000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1007 05:14:38.383361   11883 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 05:14:38.383441   11883 notify.go:220] Checking for updates...
	I1007 05:14:38.390794   11883 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	I1007 05:14:38.393800   11883 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1007 05:14:38.396848   11883 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 05:14:38.399768   11883 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	I1007 05:14:38.402754   11883 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 05:14:38.406143   11883 config.go:182] Loaded profile config "functional-359000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1007 05:14:38.406436   11883 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 05:14:38.409681   11883 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1007 05:14:38.416765   11883 start.go:297] selected driver: qemu2
	I1007 05:14:38.416771   11883 start.go:901] validating driver "qemu2" against &{Name:functional-359000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19696/minikube-v1.34.0-1727108440-19696-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-359000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 05:14:38.416834   11883 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 05:14:38.423750   11883 out.go:201] 
	W1007 05:14:38.427771   11883 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1007 05:14:38.431786   11883 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/AddonsCmd (0.1s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/License (1.34s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2288: (dbg) Done: out/minikube-darwin-arm64 license: (1.342198625s)
--- PASS: TestFunctional/parallel/License (1.34s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.7s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.670928917s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-359000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.70s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-359000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image rm kicbase/echo-server:functional-359000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-359000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 image save --daemon kicbase/echo-server:functional-359000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-359000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.1s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.10s)

TestFunctional/parallel/ProfileCmd/profile_list (0.09s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "52.761084ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "38.958541ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.09s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "53.907167ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "38.806333ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.013346834s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.05s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-359000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

TestFunctional/delete_echo-server_images (0.07s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-359000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-359000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-359000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.36s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-174000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-174000 --output=json --user=testUser: (3.364294083s)
--- PASS: TestJSONOutput/stop/Command (3.36s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-914000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-914000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (99.87225ms)
-- stdout --
	{"specversion":"1.0","id":"a21e960c-81d4-4f6f-9625-0d3fa5940fe2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-914000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7934184d-9354-4845-b3e1-aa5720968e89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18424"}}
	{"specversion":"1.0","id":"8e5803a4-e2cd-4138-93ed-def61ba8e4ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig"}}
	{"specversion":"1.0","id":"bad9e178-b350-4d20-a9ac-bebd33d170ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"307577d7-899e-470c-90ae-840ca1f7711a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"846288c5-1a89-4f16-9afb-78bf7b569585","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube"}}
	{"specversion":"1.0","id":"8c76163a-c6fc-4030-861b-d98ae0deab37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6789a65e-d722-4cde-aeb2-239d633d55fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-914000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-914000
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (4.81s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.81s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-090000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-090000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (104.769541ms)
-- stdout --
	* [NoKubernetes-090000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18424-10771/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18424-10771/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-090000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-090000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (46.697708ms)
-- stdout --
	* The control-plane node NoKubernetes-090000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-090000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (31.51s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.692596541s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.818032833s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.51s)

TestNoKubernetes/serial/Stop (3.91s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-090000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-090000: (3.914430084s)
--- PASS: TestNoKubernetes/serial/Stop (3.91s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-090000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-090000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.544583ms)
-- stdout --
	* The control-plane node NoKubernetes-090000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-090000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-431000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

TestStartStop/group/old-k8s-version/serial/Stop (3.59s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-055000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-055000 --alsologtostderr -v=3: (3.586435875s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.59s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-055000 -n old-k8s-version-055000: exit status 7 (58.975459ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-055000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
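The "(may be ok)" note reflects how the test reads the minikube status exit code: a stopped cluster is reported through a non-zero code (7 here, matching the "Stopped" stdout) rather than a failure, so the test accepts it and proceeds to addons enable. A hedged Go sketch of that decision; the helper is hypothetical, and treating exit code 7 as "fully stopped" is an assumption taken from this log, not from minikube's documented API:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStopped runs `minikube status` and treats exit code 7 -- the
// "Stopped" case seen above -- as an expected state, not a test failure.
func hostStopped(profile string) (bool, error) {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		return true, nil // stopped, which is fine right before `addons enable`
	}
	return false, err
}

func main() {
	stopped, err := hostStopped("old-k8s-version-055000")
	fmt.Println(stopped, err)
}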
TestStartStop/group/no-preload/serial/Stop (3.15s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-544000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-544000 --alsologtostderr -v=3: (3.145633833s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.15s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-544000 -n no-preload-544000: exit status 7 (61.917875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-544000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.54s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-860000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-860000 --alsologtostderr -v=3: (3.541501875s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.54s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-860000 -n embed-certs-860000: exit status 7 (40.479542ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-860000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-878000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-878000 --alsologtostderr -v=3: (3.079274834s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.08s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-878000 -n default-k8s-diff-port-878000: exit status 7 (60.10025ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-878000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-260000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.07s)

TestStartStop/group/newest-cni/serial/Stop (3.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-260000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-260000 --alsologtostderr -v=3: (3.330861417s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.33s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-260000 -n newest-cni-260000: exit status 7 (63.724625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-260000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
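The skip at functional_test.go:1787 is a plain architecture gate: the mysql image the test deploys has no arm64 build, so the test bails out early on Apple Silicon hosts. A minimal sketch of the pattern; the helper name and body are illustrative, not minikube's actual code:

package functional

import (
	"runtime"
	"testing"
)

// validateMySQL is a hypothetical stand-in that shows only the gate.
func validateMySQL(t *testing.T) {
	if runtime.GOARCH == "arm64" {
		// mysql has no arm64 image; see https://github.com/kubernetes/minikube/issues/10144
		t.Skip("arm64 is not supported by mysql")
	}
	// ... deploy mysql and run a query against it ...
}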
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (10.19s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-359000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port257259875/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728303241358900000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port257259875/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728303241358900000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port257259875/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728303241358900000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port257259875/001/test-1728303241358900000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (57.681833ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:01.417133   11284 retry.go:31] will retry after 332.348376ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.492625ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:01.842293   11284 retry.go:31] will retry after 946.495546ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (94.353459ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:02.884202   11284 retry.go:31] will retry after 1.238626478s: exit status 83
I1007 05:14:03.800719   11284 retry.go:31] will retry after 6.123209421s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.197833ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:04.216282   11284 retry.go:31] will retry after 1.901919824s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.165ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:06.211705   11284 retry.go:31] will retry after 2.123625971s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.061417ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:08.428734   11284 retry.go:31] will retry after 2.858005025s: exit status 83
I1007 05:14:09.926754   11284 retry.go:31] will retry after 8.134113505s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.313292ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "sudo umount -f /mount-9p": exit status 83 (47.685084ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-359000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-359000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port257259875/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (10.19s)
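The retry.go:31 lines above show the shape of the polling: the test keeps re-running findmnt -T /mount-9p inside the guest with a growing, jittered delay until the 9p mount appears or the time budget runs out, and on macOS it then skips because unsigned binaries cannot listen on a non-localhost port without a user prompt. A rough Go sketch of that loop, with the binary path and profile taken from the log and all durations and names illustrative rather than minikube's actual retry helper:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// waitForMount polls until `findmnt` sees the 9p mount or the budget is spent.
func waitForMount(profile, mountPoint string, budget time.Duration) error {
	backoff := 300 * time.Millisecond
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", profile,
			"ssh", fmt.Sprintf("findmnt -T %s | grep 9p", mountPoint))
		if err := cmd.Run(); err == nil {
			return nil // mount is visible inside the guest
		}
		// Sleep a jittered, roughly doubling interval, like the intervals above.
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
		backoff *= 2
	}
	return fmt.Errorf("mount %s did not appear within %v", mountPoint, budget)
}

func main() {
	if err := waitForMount("functional-359000", "/mount-9p", 10*time.Second); err != nil {
		fmt.Println("skipping:", err) // matches the SKIP outcome above
	}
}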
TestFunctional/parallel/MountCmd/specific-port (12.54s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-359000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2513160891/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (67.784792ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:11.618504   11284 retry.go:31] will retry after 697.716045ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.082333ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:12.409609   11284 retry.go:31] will retry after 1.015449691s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.508709ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:13.520017   11284 retry.go:31] will retry after 1.059830861s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.107625ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:14.674334   11284 retry.go:31] will retry after 2.00076589s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.086958ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:16.769572   11284 retry.go:31] will retry after 2.436343939s: exit status 83
I1007 05:14:18.063896   11284 retry.go:31] will retry after 14.587107074s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.001584ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:19.298303   11284 retry.go:31] will retry after 4.531045184s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.134667ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "sudo umount -f /mount-9p": exit status 83 (48.890458ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-359000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-359000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2513160891/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (12.54s)
TestFunctional/parallel/MountCmd/VerifyCleanup (13.98s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-359000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1223287948/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-359000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1223287948/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-359000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1223287948/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T" /mount1: exit status 83 (85.626792ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:24.168837   11284 retry.go:31] will retry after 303.056464ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T" /mount1: exit status 83 (92.172375ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:24.565409   11284 retry.go:31] will retry after 1.054514016s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T" /mount1: exit status 83 (92.975208ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:25.712587   11284 retry.go:31] will retry after 1.056613759s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T" /mount1: exit status 83 (92.377125ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:26.861415   11284 retry.go:31] will retry after 1.02037745s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T" /mount1: exit status 83 (93.995792ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:27.975899   11284 retry.go:31] will retry after 2.342559046s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T" /mount1: exit status 83 (89.719916ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:30.406180   11284 retry.go:31] will retry after 2.305908627s: exit status 83
I1007 05:14:32.625941   11284 retry.go:31] will retry after 11.46638003s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T" /mount1: exit status 83 (91.567625ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
I1007 05:14:32.802354   11284 retry.go:31] will retry after 4.756862958s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-359000 ssh "findmnt -T" /mount1: exit status 83 (89.462083ms)

-- stdout --
	* The control-plane node functional-359000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-359000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-359000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1223287948/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-359000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1223287948/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-359000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1223287948/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.98s)
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.51s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-585000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-585000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-585000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-585000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-585000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-585000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-585000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-585000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-585000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-585000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-585000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-585000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-585000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-585000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-585000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-585000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-585000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-585000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-585000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-585000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-585000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-585000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-585000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-585000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-585000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-585000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-585000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-585000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-585000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-585000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-585000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-585000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

>>> host: cri-docker daemon status:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

>>> host: cri-docker daemon config:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

>>> host: cri-dockerd version:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

>>> host: containerd daemon status:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

>>> host: containerd daemon config:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

>>> host: containerd config dump:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

>>> host: crio daemon status:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

>>> host: crio daemon config:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

>>> host: /etc/crio:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

>>> host: crio config:
* Profile "cilium-585000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-585000"

----------------------- debugLogs end: cilium-585000 [took: 2.380070792s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-585000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-585000
--- SKIP: TestNetworkPlugins/group/cilium (2.51s)

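For reference, the profile these probes were looking for can be recreated with minikube's standard CLI; --cni=cilium is a documented minikube option, though the exact arguments the test harness passes may differ (a sketch, not the harness's invocation):

# Recreate the profile with the Cilium CNI, then clean it up as the test did.
out/minikube-darwin-arm64 start -p cilium-585000 --cni=cilium
out/minikube-darwin-arm64 delete -p cilium-585000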
TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-803000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-803000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)

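The skip above is driver-gated: this group only executes when the integration suite targets the virtualbox driver, which this QEMU run does not. A hypothetical invocation to exercise it, assuming the suite's --minikube-start-args flag is what selects the driver under test:

# Hypothetical: run only this group against the VirtualBox driver.
go test ./test/integration -run 'TestStartStop/group/disable-driver-mounts' \
  --minikube-start-args="--driver=virtualbox"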