Test Report: QEMU_macOS 17223

f9ecce707d93fa4241f904962674ddf295a62997:2023-09-11:30961

Failed tests (87/244)

Order  Failed Test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 14.35
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 9.92
22 TestAddons/Setup 46.27
23 TestCertOptions 9.92
24 TestCertExpiration 195.14
25 TestDockerFlags 9.97
26 TestForceSystemdFlag 10.55
27 TestForceSystemdEnv 10
72 TestFunctional/parallel/ServiceCmdConnect 41.98
76 TestFunctional/parallel/SSHCmd 1.13
139 TestImageBuild/serial/BuildWithBuildArg 1.05
148 TestIngressAddonLegacy/serial/ValidateIngressAddons 57.05
183 TestMountStart/serial/StartWithMountFirst 9.98
186 TestMultiNode/serial/FreshStart2Nodes 10.11
187 TestMultiNode/serial/DeployApp2Nodes 99.24
188 TestMultiNode/serial/PingHostFrom2Pods 0.08
189 TestMultiNode/serial/AddNode 0.07
190 TestMultiNode/serial/ProfileList 0.1
191 TestMultiNode/serial/CopyFile 0.06
192 TestMultiNode/serial/StopNode 0.13
193 TestMultiNode/serial/StartAfterStop 0.1
194 TestMultiNode/serial/RestartKeepsNodes 5.36
195 TestMultiNode/serial/DeleteNode 0.1
196 TestMultiNode/serial/StopMultiNode 0.15
197 TestMultiNode/serial/RestartMultiNode 5.25
198 TestMultiNode/serial/ValidateNameConflict 19.84
202 TestPreload 10.01
204 TestScheduledStopUnix 9.81
205 TestSkaffold 11.81
208 TestRunningBinaryUpgrade 159.49
210 TestKubernetesUpgrade 15.43
223 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.93
224 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.65
225 TestStoppedBinaryUpgrade/Setup 156.94
227 TestPause/serial/Start 9.88
237 TestNoKubernetes/serial/StartWithK8s 9.85
238 TestNoKubernetes/serial/StartWithStopK8s 5.3
239 TestNoKubernetes/serial/Start 5.31
243 TestNoKubernetes/serial/StartNoArgs 5.31
245 TestNetworkPlugins/group/auto/Start 9.69
246 TestNetworkPlugins/group/kindnet/Start 9.74
247 TestNetworkPlugins/group/calico/Start 9.75
248 TestNetworkPlugins/group/custom-flannel/Start 9.93
249 TestNetworkPlugins/group/false/Start 9.9
250 TestNetworkPlugins/group/enable-default-cni/Start 9.7
251 TestNetworkPlugins/group/flannel/Start 9.72
252 TestNetworkPlugins/group/bridge/Start 9.78
253 TestNetworkPlugins/group/kubenet/Start 9.92
255 TestStartStop/group/old-k8s-version/serial/FirstStart 9.9
256 TestStoppedBinaryUpgrade/Upgrade 2.58
257 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
258 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
261 TestStoppedBinaryUpgrade/MinikubeLogs 0.08
262 TestStartStop/group/old-k8s-version/serial/SecondStart 5.29
264 TestStartStop/group/no-preload/serial/FirstStart 10.48
265 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
266 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
267 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
268 TestStartStop/group/old-k8s-version/serial/Pause 0.1
270 TestStartStop/group/embed-certs/serial/FirstStart 9.86
271 TestStartStop/group/no-preload/serial/DeployApp 0.09
272 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
275 TestStartStop/group/no-preload/serial/SecondStart 5.21
276 TestStartStop/group/embed-certs/serial/DeployApp 0.09
277 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
280 TestStartStop/group/embed-certs/serial/SecondStart 5.24
281 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
282 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.05
283 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
284 TestStartStop/group/no-preload/serial/Pause 0.1
286 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.8
287 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.05
289 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/embed-certs/serial/Pause 0.1
292 TestStartStop/group/newest-cni/serial/FirstStart 9.82
293 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
294 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
297 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.24
302 TestStartStop/group/newest-cni/serial/SecondStart 5.24
303 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
304 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.05
305 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
306 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
309 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
310 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.16.0/json-events (14.35s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-412000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-412000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (14.351642s)

-- stdout --
	{"specversion":"1.0","id":"b110ffef-607d-4394-8591-fb3c86d183b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-412000] minikube v1.31.2 on Darwin 13.5.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"be4bd383-4e91-4552-8160-ef3538b3daf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17223"}}
	{"specversion":"1.0","id":"43c91e7b-0bf7-4efb-bdd2-8fb0bb5393d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig"}}
	{"specversion":"1.0","id":"a4c2d7e9-bcb9-4b7e-a863-3a96fc44350d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ae70d1db-f3a9-4830-bd42-928c02250d33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cbfaf608-5e4a-4650-ae6a-cfde4b42d065","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube"}}
	{"specversion":"1.0","id":"f17ef8c5-7aa8-49fd-9aea-20806faf147f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"07b13bed-683a-45bd-9890-f9f02c832ce1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1a314e04-7806-40ce-8701-9f3bdc04f5b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"5f1dd2f9-5005-49e5-bbe0-22dbfb1a1355","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8572328-332b-419b-b37d-7b39e0735492","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-412000 in cluster download-only-412000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4b4d28e3-06f1-4b51-b20d-3636ff611d04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8c60e0d-a9fa-4d2f-91a6-1a9325978e1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10630df68 0x10630df68 0x10630df68 0x10630df68 0x10630df68 0x10630df68 0x10630df68] Decompressors:map[bz2:0x14000057da8 gz:0x14000057e00 tar:0x14000057db0 tar.bz2:0x14000057dc0 tar.gz:0x14000057dd0 tar.xz:0x14000057de0 tar.zst:0x14000057df0 tbz2:0x14000057dc0 tgz:0x14000057dd0 txz:0x14000057de0 tzst:0x14000057df0 xz:0x14000057e08 zip:0x14000057e10 zst:0x14000057e20] Getters:map[file:0x1400019eca0 http:0x14000f02140 https:0x14000f02190] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"21cb3488-cd8c-4d2c-9cb7-57c10b9ef295","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0911 03:53:37.558047    1567 out.go:296] Setting OutFile to fd 1 ...
	I0911 03:53:37.558188    1567 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:53:37.558191    1567 out.go:309] Setting ErrFile to fd 2...
	I0911 03:53:37.558193    1567 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:53:37.558298    1567 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	W0911 03:53:37.558368    1567 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17223-1124/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17223-1124/.minikube/config/config.json: no such file or directory
	I0911 03:53:37.559541    1567 out.go:303] Setting JSON to true
	I0911 03:53:37.575956    1567 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1391,"bootTime":1694428226,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 03:53:37.576032    1567 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 03:53:37.584502    1567 out.go:97] [download-only-412000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 03:53:37.588453    1567 out.go:169] MINIKUBE_LOCATION=17223
	W0911 03:53:37.584644    1567 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball: no such file or directory
	I0911 03:53:37.584679    1567 notify.go:220] Checking for updates...
	I0911 03:53:37.598457    1567 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 03:53:37.601486    1567 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 03:53:37.604474    1567 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 03:53:37.607477    1567 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	W0911 03:53:37.611988    1567 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0911 03:53:37.612224    1567 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 03:53:37.617433    1567 out.go:97] Using the qemu2 driver based on user configuration
	I0911 03:53:37.617438    1567 start.go:298] selected driver: qemu2
	I0911 03:53:37.617450    1567 start.go:902] validating driver "qemu2" against <nil>
	I0911 03:53:37.617492    1567 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 03:53:37.620383    1567 out.go:169] Automatically selected the socket_vmnet network
	I0911 03:53:37.626920    1567 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0911 03:53:37.626999    1567 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0911 03:53:37.627083    1567 cni.go:84] Creating CNI manager for ""
	I0911 03:53:37.627098    1567 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0911 03:53:37.627103    1567 start_flags.go:321] config:
	{Name:download-only-412000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-412000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:53:37.632211    1567 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 03:53:37.636610    1567 out.go:97] Downloading VM boot image ...
	I0911 03:53:37.636746    1567 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso
	I0911 03:53:43.130618    1567 out.go:97] Starting control plane node download-only-412000 in cluster download-only-412000
	I0911 03:53:43.130642    1567 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 03:53:43.185800    1567 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0911 03:53:43.185889    1567 cache.go:57] Caching tarball of preloaded images
	I0911 03:53:43.186066    1567 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 03:53:43.189662    1567 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0911 03:53:43.189671    1567 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:53:43.270213    1567 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0911 03:53:50.843206    1567 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:53:50.843359    1567 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:53:51.485758    1567 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0911 03:53:51.485950    1567 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/download-only-412000/config.json ...
	I0911 03:53:51.485972    1567 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/download-only-412000/config.json: {Name:mk93908f6e70cc7147706f6edb9295b5967f3765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:53:51.486204    1567 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 03:53:51.486379    1567 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0911 03:53:51.834950    1567 out.go:169] 
	W0911 03:53:51.841140    1567 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10630df68 0x10630df68 0x10630df68 0x10630df68 0x10630df68 0x10630df68 0x10630df68] Decompressors:map[bz2:0x14000057da8 gz:0x14000057e00 tar:0x14000057db0 tar.bz2:0x14000057dc0 tar.gz:0x14000057dd0 tar.xz:0x14000057de0 tar.zst:0x14000057df0 tbz2:0x14000057dc0 tgz:0x14000057dd0 txz:0x14000057de0 tzst:0x14000057df0 xz:0x14000057e08 zip:0x14000057e10 zst:0x14000057e20] Getters:map[file:0x1400019eca0 http:0x14000f02140 https:0x14000f02190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0911 03:53:51.841165    1567 out_reason.go:110] 
	W0911 03:53:51.848024    1567 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 03:53:51.852024    1567 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-412000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (14.35s)
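
The exit-status-40 failure above is fully explained by the error payload: the kubectl checksum fetch returns HTTP 404, most likely because Kubernetes v1.16.0 predates published darwin/arm64 release binaries. A minimal check of the missing artifacts (both URLs copied verbatim from the log; the .sha1 URL is the one the getter reports as a 404):

	# Print the final HTTP status after following dl.k8s.io redirects.
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl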

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)
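
This subtest is a direct casualty of the download failure above: it only verifies that the previous step left kubectl in the cache. Checking the path by hand (copied verbatim from the log) reproduces the same error the test reports:

	# Fails with "No such file or directory": the v1.16.0 download exited with
	# status 40 before anything was written to the cache.
	stat /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/darwin/arm64/v1.16.0/kubectl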

TestOffline (9.92s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-891000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-891000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.749173042s)

-- stdout --
	* [offline-docker-891000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-891000 in cluster offline-docker-891000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-891000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:07:47.387653    3090 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:07:47.387778    3090 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:07:47.387781    3090 out.go:309] Setting ErrFile to fd 2...
	I0911 04:07:47.387783    3090 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:07:47.387907    3090 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:07:47.388942    3090 out.go:303] Setting JSON to false
	I0911 04:07:47.405306    3090 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2241,"bootTime":1694428226,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:07:47.405381    3090 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:07:47.410031    3090 out.go:177] * [offline-docker-891000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:07:47.417993    3090 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:07:47.421951    3090 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:07:47.418070    3090 notify.go:220] Checking for updates...
	I0911 04:07:47.425906    3090 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:07:47.428964    3090 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:07:47.432020    3090 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:07:47.434973    3090 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:07:47.439068    3090 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:07:47.439149    3090 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:07:47.442900    3090 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:07:47.449950    3090 start.go:298] selected driver: qemu2
	I0911 04:07:47.449958    3090 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:07:47.449971    3090 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:07:47.451874    3090 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:07:47.456009    3090 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:07:47.459007    3090 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:07:47.459031    3090 cni.go:84] Creating CNI manager for ""
	I0911 04:07:47.459038    3090 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:07:47.459041    3090 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:07:47.459047    3090 start_flags.go:321] config:
	{Name:offline-docker-891000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:07:47.463165    3090 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:07:47.470893    3090 out.go:177] * Starting control plane node offline-docker-891000 in cluster offline-docker-891000
	I0911 04:07:47.474943    3090 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:07:47.474983    3090 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:07:47.474998    3090 cache.go:57] Caching tarball of preloaded images
	I0911 04:07:47.475083    3090 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:07:47.475088    3090 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:07:47.475153    3090 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/offline-docker-891000/config.json ...
	I0911 04:07:47.475164    3090 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/offline-docker-891000/config.json: {Name:mk047e85b0f094ef456ef647afbb4726c5b071f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:07:47.475343    3090 start.go:365] acquiring machines lock for offline-docker-891000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:07:47.475376    3090 start.go:369] acquired machines lock for "offline-docker-891000" in 21.708µs
	I0911 04:07:47.475386    3090 start.go:93] Provisioning new machine with config: &{Name:offline-docker-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:07:47.475434    3090 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:07:47.478968    3090 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0911 04:07:47.492782    3090 start.go:159] libmachine.API.Create for "offline-docker-891000" (driver="qemu2")
	I0911 04:07:47.492807    3090 client.go:168] LocalClient.Create starting
	I0911 04:07:47.492876    3090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:07:47.492903    3090 main.go:141] libmachine: Decoding PEM data...
	I0911 04:07:47.492914    3090 main.go:141] libmachine: Parsing certificate...
	I0911 04:07:47.492957    3090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:07:47.492975    3090 main.go:141] libmachine: Decoding PEM data...
	I0911 04:07:47.492983    3090 main.go:141] libmachine: Parsing certificate...
	I0911 04:07:47.493319    3090 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:07:47.611481    3090 main.go:141] libmachine: Creating SSH key...
	I0911 04:07:47.727254    3090 main.go:141] libmachine: Creating Disk image...
	I0911 04:07:47.727263    3090 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:07:47.727559    3090 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/disk.qcow2
	I0911 04:07:47.741093    3090 main.go:141] libmachine: STDOUT: 
	I0911 04:07:47.741106    3090 main.go:141] libmachine: STDERR: 
	I0911 04:07:47.741159    3090 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/disk.qcow2 +20000M
	I0911 04:07:47.748817    3090 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:07:47.748836    3090 main.go:141] libmachine: STDERR: 
	I0911 04:07:47.748861    3090 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/disk.qcow2
	I0911 04:07:47.748868    3090 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:07:47.748907    3090 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:62:5b:1c:1d:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/disk.qcow2
	I0911 04:07:47.750804    3090 main.go:141] libmachine: STDOUT: 
	I0911 04:07:47.750818    3090 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:07:47.750837    3090 client.go:171] LocalClient.Create took 258.031458ms
	I0911 04:07:49.752839    3090 start.go:128] duration metric: createHost completed in 2.277466958s
	I0911 04:07:49.752881    3090 start.go:83] releasing machines lock for "offline-docker-891000", held for 2.277560166s
	W0911 04:07:49.752907    3090 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:07:49.768367    3090 out.go:177] * Deleting "offline-docker-891000" in qemu2 ...
	W0911 04:07:49.775203    3090 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:07:49.775211    3090 start.go:687] Will try again in 5 seconds ...
	I0911 04:07:54.777316    3090 start.go:365] acquiring machines lock for offline-docker-891000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:07:54.777620    3090 start.go:369] acquired machines lock for "offline-docker-891000" in 243.208µs
	I0911 04:07:54.777714    3090 start.go:93] Provisioning new machine with config: &{Name:offline-docker-891000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:offline-docker-891000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:07:54.777973    3090 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:07:54.785228    3090 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0911 04:07:54.821109    3090 start.go:159] libmachine.API.Create for "offline-docker-891000" (driver="qemu2")
	I0911 04:07:54.821145    3090 client.go:168] LocalClient.Create starting
	I0911 04:07:54.821259    3090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:07:54.821313    3090 main.go:141] libmachine: Decoding PEM data...
	I0911 04:07:54.821325    3090 main.go:141] libmachine: Parsing certificate...
	I0911 04:07:54.821386    3090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:07:54.821416    3090 main.go:141] libmachine: Decoding PEM data...
	I0911 04:07:54.821425    3090 main.go:141] libmachine: Parsing certificate...
	I0911 04:07:54.821884    3090 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:07:54.951716    3090 main.go:141] libmachine: Creating SSH key...
	I0911 04:07:55.054513    3090 main.go:141] libmachine: Creating Disk image...
	I0911 04:07:55.054519    3090 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:07:55.054700    3090 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/disk.qcow2
	I0911 04:07:55.063278    3090 main.go:141] libmachine: STDOUT: 
	I0911 04:07:55.063288    3090 main.go:141] libmachine: STDERR: 
	I0911 04:07:55.063334    3090 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/disk.qcow2 +20000M
	I0911 04:07:55.070463    3090 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:07:55.070473    3090 main.go:141] libmachine: STDERR: 
	I0911 04:07:55.070486    3090 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/disk.qcow2
	I0911 04:07:55.070492    3090 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:07:55.070520    3090 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:cc:cb:82:a8:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/offline-docker-891000/disk.qcow2
	I0911 04:07:55.071998    3090 main.go:141] libmachine: STDOUT: 
	I0911 04:07:55.072008    3090 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:07:55.072019    3090 client.go:171] LocalClient.Create took 250.87625ms
	I0911 04:07:57.074135    3090 start.go:128] duration metric: createHost completed in 2.296203667s
	I0911 04:07:57.074227    3090 start.go:83] releasing machines lock for "offline-docker-891000", held for 2.296662959s
	W0911 04:07:57.074690    3090 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-891000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:07:57.082203    3090 out.go:177] 
	W0911 04:07:57.086225    3090 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:07:57.086256    3090 out.go:239] * 
	* 
	W0911 04:07:57.088746    3090 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:07:57.097174    3090 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-891000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-09-11 04:07:57.114422 -0700 PDT m=+859.688792542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-891000 -n offline-docker-891000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-891000 -n offline-docker-891000: exit status 7 (65.0565ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-891000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-891000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-891000
--- FAIL: TestOffline (9.92s)
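
Every qemu2 start in this run dies the same way: the socket_vmnet_client wrapper cannot reach the Unix socket at /var/run/socket_vmnet, which suggests the socket_vmnet daemon was not running on the agent. A minimal triage sketch, assuming the stock install paths shown in the log above (the gateway address is the upstream documentation's example, not something recorded in this report):

	# The socket should exist and a socket_vmnet process should be serving it.
	ls -l /var/run/socket_vmnet
	pgrep -lf socket_vmnet
	# If nothing is listening, start the daemon before re-running the suite:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet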

TestAddons/Setup (46.27s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-136000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-136000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (46.267912208s)

-- stdout --
	* [addons-136000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node addons-136000 in cluster addons-136000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying ingress addon...
	
	* Verifying csi-hostpath-driver addon...
	  - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	* Verifying registry addon...
	  - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	

-- /stdout --
** stderr ** 
	I0911 03:54:02.291257    1636 out.go:296] Setting OutFile to fd 1 ...
	I0911 03:54:02.291377    1636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:54:02.291379    1636 out.go:309] Setting ErrFile to fd 2...
	I0911 03:54:02.291382    1636 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:54:02.291483    1636 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 03:54:02.292486    1636 out.go:303] Setting JSON to false
	I0911 03:54:02.307595    1636 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1416,"bootTime":1694428226,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 03:54:02.307669    1636 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 03:54:02.313111    1636 out.go:177] * [addons-136000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 03:54:02.320087    1636 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 03:54:02.324105    1636 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 03:54:02.320129    1636 notify.go:220] Checking for updates...
	I0911 03:54:02.327066    1636 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 03:54:02.330059    1636 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 03:54:02.333118    1636 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 03:54:02.336086    1636 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 03:54:02.339237    1636 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 03:54:02.343085    1636 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 03:54:02.350014    1636 start.go:298] selected driver: qemu2
	I0911 03:54:02.350019    1636 start.go:902] validating driver "qemu2" against <nil>
	I0911 03:54:02.350024    1636 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 03:54:02.351941    1636 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 03:54:02.355147    1636 out.go:177] * Automatically selected the socket_vmnet network
	I0911 03:54:02.356440    1636 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 03:54:02.356461    1636 cni.go:84] Creating CNI manager for ""
	I0911 03:54:02.356466    1636 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:54:02.356470    1636 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 03:54:02.356474    1636 start_flags.go:321] config:
	{Name:addons-136000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:54:02.360709    1636 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 03:54:02.369052    1636 out.go:177] * Starting control plane node addons-136000 in cluster addons-136000
	I0911 03:54:02.373022    1636 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 03:54:02.373087    1636 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 03:54:02.373099    1636 cache.go:57] Caching tarball of preloaded images
	I0911 03:54:02.373147    1636 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 03:54:02.373152    1636 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 03:54:02.373324    1636 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/config.json ...
	I0911 03:54:02.373339    1636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/config.json: {Name:mk9aad42b7e9c885f46ce7d2844e268582832189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:54:02.373577    1636 start.go:365] acquiring machines lock for addons-136000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 03:54:02.373671    1636 start.go:369] acquired machines lock for "addons-136000" in 86.666µs
	I0911 03:54:02.373683    1636 start.go:93] Provisioning new machine with config: &{Name:addons-136000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 03:54:02.373708    1636 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 03:54:02.382073    1636 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
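For reference, the sizing in the line above comes straight from minikube's start flags; a minimal sketch of an equivalent manual invocation (flag values taken from this run; the qemu2 driver and socket_vmnet are assumed to be installed):

    minikube start --driver=qemu2 --network=socket_vmnet \
      --cpus=2 --memory=4000mb --disk-size=20000mb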
	I0911 03:54:02.732945    1636 start.go:159] libmachine.API.Create for "addons-136000" (driver="qemu2")
	I0911 03:54:02.733007    1636 client.go:168] LocalClient.Create starting
	I0911 03:54:02.733192    1636 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 03:54:02.841640    1636 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 03:54:03.140646    1636 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 03:54:03.345216    1636 main.go:141] libmachine: Creating SSH key...
	I0911 03:54:03.451511    1636 main.go:141] libmachine: Creating Disk image...
	I0911 03:54:03.451517    1636 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 03:54:03.452377    1636 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/disk.qcow2
	I0911 03:54:03.486651    1636 main.go:141] libmachine: STDOUT: 
	I0911 03:54:03.486672    1636 main.go:141] libmachine: STDERR: 
	I0911 03:54:03.486741    1636 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/disk.qcow2 +20000M
	I0911 03:54:03.494182    1636 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 03:54:03.494204    1636 main.go:141] libmachine: STDERR: 
	I0911 03:54:03.494219    1636 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/disk.qcow2
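The two qemu-img calls above are the usual grow-on-demand disk pattern; condensed here with the long /Users/... paths shortened:

    # convert the raw boot2docker seed into a copy-on-write qcow2 image
    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
    # then grow its *virtual* size by 20000 MB; blocks are allocated lazily,
    # so the file on the host stays small until the guest writes data
    qemu-img resize disk.qcow2 +20000M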
	I0911 03:54:03.494225    1636 main.go:141] libmachine: Starting QEMU VM...
	I0911 03:54:03.494264    1636 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:8d:15:a0:6f:df -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/disk.qcow2
	I0911 03:54:03.565304    1636 main.go:141] libmachine: STDOUT: 
	I0911 03:54:03.565335    1636 main.go:141] libmachine: STDERR: 
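Broken out for readability, the launch command above amounts to the following (MAC address and flags are from this run; paths are shortened, and a running socket_vmnet daemon is assumed):

    # socket_vmnet_client dials the vmnet switch and hands QEMU the connection
    # as fd 3 (hence -netdev socket,fd=3); -accel hvf uses Hypervisor.framework,
    # and the pflash drive is the EDK2 UEFI firmware for the virt machine type.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
      qemu-system-aarch64 -M virt -cpu host -accel hvf -m 4000 -smp 2 \
      -drive file=edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
      -display none -boot d -cdrom boot2docker.iso \
      -device virtio-net-pci,netdev=net0,mac=1a:8d:15:a0:6f:df \
      -netdev socket,id=net0,fd=3 \
      -qmp unix:monitor,server,nowait -pidfile qemu.pid \
      -daemonize disk.qcow2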
	I0911 03:54:03.565340    1636 main.go:141] libmachine: Attempt 0
	I0911 03:54:03.565356    1636 main.go:141] libmachine: Searching for 1a:8d:15:a0:6f:df in /var/db/dhcpd_leases ...
	I0911 03:54:05.567833    1636 main.go:141] libmachine: Attempt 1
	I0911 03:54:05.567915    1636 main.go:141] libmachine: Searching for 1a:8d:15:a0:6f:df in /var/db/dhcpd_leases ...
	I0911 03:54:07.570099    1636 main.go:141] libmachine: Attempt 2
	I0911 03:54:07.570125    1636 main.go:141] libmachine: Searching for 1a:8d:15:a0:6f:df in /var/db/dhcpd_leases ...
	I0911 03:54:09.572179    1636 main.go:141] libmachine: Attempt 3
	I0911 03:54:09.572201    1636 main.go:141] libmachine: Searching for 1a:8d:15:a0:6f:df in /var/db/dhcpd_leases ...
	I0911 03:54:11.574302    1636 main.go:141] libmachine: Attempt 4
	I0911 03:54:11.574329    1636 main.go:141] libmachine: Searching for 1a:8d:15:a0:6f:df in /var/db/dhcpd_leases ...
	I0911 03:54:13.575408    1636 main.go:141] libmachine: Attempt 5
	I0911 03:54:13.575444    1636 main.go:141] libmachine: Searching for 1a:8d:15:a0:6f:df in /var/db/dhcpd_leases ...
	I0911 03:54:15.577500    1636 main.go:141] libmachine: Attempt 6
	I0911 03:54:15.577528    1636 main.go:141] libmachine: Searching for 1a:8d:15:a0:6f:df in /var/db/dhcpd_leases ...
	I0911 03:54:15.577654    1636 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0911 03:54:15.577687    1636 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:8d:15:a0:6f:df ID:1,1a:8d:15:a0:6f:df Lease:0x65004356}
	I0911 03:54:15.577691    1636 main.go:141] libmachine: Found match: 1a:8d:15:a0:6f:df
	I0911 03:54:15.577706    1636 main.go:141] libmachine: IP: 192.168.105.2
	I0911 03:54:15.577710    1636 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
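The retry loop above is simply polling the lease database that macOS's vmnet DHCP server maintains; the same lookup can be done by hand (MAC from this run):

    # each lease is a small {...} block with name, ip_address and hw_address
    grep -i -B2 -A3 '1a:8d:15:a0:6f:df' /var/db/dhcpd_leases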
	I0911 03:54:16.584359    1636 machine.go:88] provisioning docker machine ...
	I0911 03:54:16.584399    1636 buildroot.go:166] provisioning hostname "addons-136000"
	I0911 03:54:16.586060    1636 main.go:141] libmachine: Using SSH client type: native
	I0911 03:54:16.586474    1636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e23b0] 0x1028e4e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0911 03:54:16.586482    1636 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-136000 && echo "addons-136000" | sudo tee /etc/hostname
	I0911 03:54:16.612598    1636 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0911 03:54:19.735157    1636 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-136000
	
	I0911 03:54:19.735301    1636 main.go:141] libmachine: Using SSH client type: native
	I0911 03:54:19.735890    1636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e23b0] 0x1028e4e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0911 03:54:19.735906    1636 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-136000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-136000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-136000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 03:54:19.823084    1636 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 03:54:19.823102    1636 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17223-1124/.minikube CaCertPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17223-1124/.minikube}
	I0911 03:54:19.823116    1636 buildroot.go:174] setting up certificates
	I0911 03:54:19.823126    1636 provision.go:83] configureAuth start
	I0911 03:54:19.823132    1636 provision.go:138] copyHostCerts
	I0911 03:54:19.823338    1636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.pem (1078 bytes)
	I0911 03:54:19.823801    1636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17223-1124/.minikube/cert.pem (1123 bytes)
	I0911 03:54:19.823965    1636 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17223-1124/.minikube/key.pem (1679 bytes)
	I0911 03:54:19.824096    1636 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca-key.pem org=jenkins.addons-136000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-136000]
	I0911 03:54:19.933626    1636 provision.go:172] copyRemoteCerts
	I0911 03:54:19.933685    1636 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 03:54:19.933694    1636 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/id_rsa Username:docker}
	I0911 03:54:19.971345    1636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0911 03:54:19.978347    1636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0911 03:54:19.985195    1636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 03:54:19.992601    1636 provision.go:86] duration metric: configureAuth took 169.474417ms
	I0911 03:54:19.992609    1636 buildroot.go:189] setting minikube options for container-runtime
	I0911 03:54:19.992709    1636 config.go:182] Loaded profile config "addons-136000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:54:19.992744    1636 main.go:141] libmachine: Using SSH client type: native
	I0911 03:54:19.992971    1636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e23b0] 0x1028e4e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0911 03:54:19.992979    1636 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0911 03:54:20.060544    1636 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0911 03:54:20.060566    1636 buildroot.go:70] root file system type: tmpfs
	I0911 03:54:20.060621    1636 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0911 03:54:20.060665    1636 main.go:141] libmachine: Using SSH client type: native
	I0911 03:54:20.060902    1636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e23b0] 0x1028e4e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0911 03:54:20.060941    1636 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0911 03:54:20.134173    1636 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0911 03:54:20.134225    1636 main.go:141] libmachine: Using SSH client type: native
	I0911 03:54:20.134488    1636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e23b0] 0x1028e4e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0911 03:54:20.134498    1636 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0911 03:54:20.468768    1636 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
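The diff-or-move one-liner a few lines up is an install-if-changed pattern; the same command reformatted onto multiple lines:

    # no-op when the rendered unit matches the installed one; otherwise
    # swap it in, reload systemd, and (re)start docker
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    }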
	
	I0911 03:54:20.468782    1636 machine.go:91] provisioned docker machine in 3.884510584s
	I0911 03:54:20.468788    1636 client.go:171] LocalClient.Create took 17.736225s
	I0911 03:54:20.468804    1636 start.go:167] duration metric: libmachine.API.Create for "addons-136000" took 17.736319125s
	I0911 03:54:20.468809    1636 start.go:300] post-start starting for "addons-136000" (driver="qemu2")
	I0911 03:54:20.468813    1636 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 03:54:20.468886    1636 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 03:54:20.468896    1636 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/id_rsa Username:docker}
	I0911 03:54:20.505691    1636 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 03:54:20.507231    1636 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 03:54:20.507243    1636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17223-1124/.minikube/addons for local assets ...
	I0911 03:54:20.507318    1636 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17223-1124/.minikube/files for local assets ...
	I0911 03:54:20.507350    1636 start.go:303] post-start completed in 38.538583ms
	I0911 03:54:20.507713    1636 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/config.json ...
	I0911 03:54:20.507887    1636 start.go:128] duration metric: createHost completed in 18.134636333s
	I0911 03:54:20.507908    1636 main.go:141] libmachine: Using SSH client type: native
	I0911 03:54:20.508138    1636 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028e23b0] 0x1028e4e10 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0911 03:54:20.508143    1636 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 03:54:20.575628    1636 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694429660.612391336
	
	I0911 03:54:20.575638    1636 fix.go:206] guest clock: 1694429660.612391336
	I0911 03:54:20.575641    1636 fix.go:219] Guest: 2023-09-11 03:54:20.612391336 -0700 PDT Remote: 2023-09-11 03:54:20.50789 -0700 PDT m=+18.235853334 (delta=104.501336ms)
	I0911 03:54:20.575651    1636 fix.go:190] guest clock delta is within tolerance: 104.501336ms
	I0911 03:54:20.575654    1636 start.go:83] releasing machines lock for "addons-136000", held for 18.20244025s
	I0911 03:54:20.575911    1636 ssh_runner.go:195] Run: cat /version.json
	I0911 03:54:20.575919    1636 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/id_rsa Username:docker}
	I0911 03:54:20.575924    1636 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 03:54:20.575959    1636 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/id_rsa Username:docker}
	I0911 03:54:20.614968    1636 ssh_runner.go:195] Run: systemctl --version
	I0911 03:54:20.657537    1636 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 03:54:20.659433    1636 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 03:54:20.659467    1636 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 03:54:20.664722    1636 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 03:54:20.664730    1636 start.go:466] detecting cgroup driver to use...
	I0911 03:54:20.664845    1636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 03:54:20.670457    1636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0911 03:54:20.673500    1636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0911 03:54:20.676817    1636 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0911 03:54:20.676841    1636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0911 03:54:20.680268    1636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 03:54:20.685476    1636 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0911 03:54:20.689989    1636 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 03:54:20.693960    1636 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 03:54:20.698606    1636 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0911 03:54:20.701852    1636 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 03:54:20.705166    1636 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 03:54:20.711144    1636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:54:20.773859    1636 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0911 03:54:20.780574    1636 start.go:466] detecting cgroup driver to use...
	I0911 03:54:20.780640    1636 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0911 03:54:20.785804    1636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 03:54:20.790799    1636 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 03:54:20.796269    1636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 03:54:20.801044    1636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0911 03:54:20.806164    1636 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0911 03:54:20.836457    1636 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0911 03:54:20.841903    1636 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 03:54:20.847265    1636 ssh_runner.go:195] Run: which cri-dockerd
	I0911 03:54:20.848497    1636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0911 03:54:20.851556    1636 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0911 03:54:20.856201    1636 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0911 03:54:20.917394    1636 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0911 03:54:20.975898    1636 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0911 03:54:20.975912    1636 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
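The 144-byte daemon.json is generated in memory, so its contents do not appear in the log; as a rough sketch only (the exact file is an assumption, not captured in this run), pinning the cgroup driver looks like:

    # ASSUMPTION: illustrative contents, not the actual file from this run
    echo '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }' | sudo tee /etc/docker/daemon.json
    sudo systemctl daemon-reload && sudo systemctl restart docker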
	I0911 03:54:20.981220    1636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:54:21.042300    1636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0911 03:54:22.196246    1636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.153958917s)
	I0911 03:54:22.196300    1636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0911 03:54:22.258123    1636 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0911 03:54:22.336077    1636 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0911 03:54:22.397803    1636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:54:22.466746    1636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0911 03:54:22.473208    1636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:54:22.537711    1636 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0911 03:54:22.561296    1636 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0911 03:54:22.561767    1636 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0911 03:54:22.563915    1636 start.go:534] Will wait 60s for crictl version
	I0911 03:54:22.563957    1636 ssh_runner.go:195] Run: which crictl
	I0911 03:54:22.565407    1636 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 03:54:22.580013    1636 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.5
	RuntimeApiVersion:  v1alpha2
	I0911 03:54:22.580113    1636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0911 03:54:22.589826    1636 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0911 03:54:22.601516    1636 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	I0911 03:54:22.601635    1636 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0911 03:54:22.603062    1636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
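The one-liner above edits /etc/hosts without sed -i: filter out any stale entry, append the fresh one, and copy the result back in a single step. Generalized (NAME and IP are placeholders):

    NAME=host.minikube.internal IP=192.168.105.1
    { grep -v "$(printf '\t%s$' "$NAME")" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts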
	I0911 03:54:22.606723    1636 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 03:54:22.606771    1636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0911 03:54:22.611645    1636 docker.go:636] Got preloaded images: 
	I0911 03:54:22.611653    1636 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0911 03:54:22.611692    1636 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0911 03:54:22.614439    1636 ssh_runner.go:195] Run: which lz4
	I0911 03:54:22.615559    1636 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0911 03:54:22.616701    1636 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 03:54:22.616715    1636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0911 03:54:23.920389    1636 docker.go:600] Took 1.304898 seconds to copy over tarball
	I0911 03:54:23.920464    1636 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 03:54:24.965929    1636 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.045467416s)
	I0911 03:54:24.965945    1636 ssh_runner.go:146] rm: /preloaded.tar.lz4
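Summing up the preload fast path above: the host-side tarball of container images is copied into the guest and unpacked over /var, so dockerd comes up with every control-plane image already present. Done by hand it would look roughly like this (guest IP from this run; key path abbreviated):

    KEY=.minikube/machines/addons-136000/id_rsa
    scp -i "$KEY" preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 \
      docker@192.168.105.2:/preloaded.tar.lz4
    ssh -i "$KEY" docker@192.168.105.2 \
      'sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'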
	I0911 03:54:24.982302    1636 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0911 03:54:24.985308    1636 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0911 03:54:24.990587    1636 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:54:25.065506    1636 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0911 03:54:26.692696    1636 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.62721625s)
	I0911 03:54:26.692801    1636 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0911 03:54:26.698808    1636 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0911 03:54:26.698820    1636 cache_images.go:84] Images are preloaded, skipping loading
	I0911 03:54:26.698885    1636 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0911 03:54:26.706709    1636 cni.go:84] Creating CNI manager for ""
	I0911 03:54:26.706721    1636 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:54:26.706748    1636 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 03:54:26.706757    1636 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-136000 NodeName:addons-136000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 03:54:26.706838    1636 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-136000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
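The rendered config above is staged as /var/tmp/minikube/kubeadm.yaml a few lines below; once in place it can be sanity-checked inside the guest without mutating the node:

    # --dry-run prints the manifests kubeadm would generate and exits
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run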
	
	I0911 03:54:26.706872    1636 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-136000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 03:54:26.706933    1636 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 03:54:26.709804    1636 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 03:54:26.709837    1636 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 03:54:26.712742    1636 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0911 03:54:26.717820    1636 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 03:54:26.722854    1636 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0911 03:54:26.728162    1636 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0911 03:54:26.729441    1636 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 03:54:26.732976    1636 certs.go:56] Setting up /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000 for IP: 192.168.105.2
	I0911 03:54:26.732985    1636 certs.go:190] acquiring lock for shared ca certs: {Name:mk38c09806021c18792511eb48bf232ccb80ec29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:54:26.733125    1636 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.key
	I0911 03:54:26.823494    1636 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.crt ...
	I0911 03:54:26.823498    1636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.crt: {Name:mk46a4e966ff97d683af8725037ee5d33ef1384f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:54:26.823691    1636 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.key ...
	I0911 03:54:26.823694    1636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.key: {Name:mka526c4f37d101de012cb6349c7354108395f7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:54:26.823801    1636 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.key
	I0911 03:54:26.945274    1636 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.crt ...
	I0911 03:54:26.945278    1636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.crt: {Name:mk41a7333719f89b083a4a528f06962af436e0e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:54:26.945484    1636 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.key ...
	I0911 03:54:26.945487    1636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.key: {Name:mke0ffd569bec48f9be4d26dc58fdc349df7bfc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:54:26.945634    1636 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/client.key
	I0911 03:54:26.945640    1636 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/client.crt with IP's: []
	I0911 03:54:27.019733    1636 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/client.crt ...
	I0911 03:54:27.019740    1636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/client.crt: {Name:mk6bbd654d397a202330e1a5c3b907bfe52f64a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:54:27.019967    1636 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/client.key ...
	I0911 03:54:27.019970    1636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/client.key: {Name:mkec804dd15ed8cfaaff9f301ff2c7cbc851290a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:54:27.020077    1636 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/apiserver.key.96055969
	I0911 03:54:27.020086    1636 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 03:54:27.132287    1636 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/apiserver.crt.96055969 ...
	I0911 03:54:27.132300    1636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/apiserver.crt.96055969: {Name:mkebb98a4ff9d77454b2fb2664f2d309a5aab2ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:54:27.132439    1636 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/apiserver.key.96055969 ...
	I0911 03:54:27.132442    1636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/apiserver.key.96055969: {Name:mka78c972a57f45c0e20321a3f137d9ec7b63430 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:54:27.132541    1636 certs.go:337] copying /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/apiserver.crt
	I0911 03:54:27.132752    1636 certs.go:341] copying /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/apiserver.key
	I0911 03:54:27.132861    1636 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/proxy-client.key
	I0911 03:54:27.132872    1636 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/proxy-client.crt with IP's: []
	I0911 03:54:27.333051    1636 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/proxy-client.crt ...
	I0911 03:54:27.333057    1636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/proxy-client.crt: {Name:mk20cbd8b65f52cfdcb3ec426472af4ed2cec631 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:54:27.333272    1636 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/proxy-client.key ...
	I0911 03:54:27.333276    1636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/proxy-client.key: {Name:mkb4c08a62351fe7c7936c1b92f37c8e7d59c3e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:54:27.333662    1636 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 03:54:27.333691    1636 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem (1078 bytes)
	I0911 03:54:27.333714    1636 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem (1123 bytes)
	I0911 03:54:27.333735    1636 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/key.pem (1679 bytes)
	I0911 03:54:27.334108    1636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 03:54:27.341990    1636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 03:54:27.348804    1636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 03:54:27.356173    1636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/addons-136000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 03:54:27.363508    1636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 03:54:27.370162    1636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 03:54:27.376883    1636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 03:54:27.384119    1636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0911 03:54:27.392251    1636 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 03:54:27.399608    1636 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 03:54:27.406501    1636 ssh_runner.go:195] Run: openssl version
	I0911 03:54:27.408380    1636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 03:54:27.411303    1636 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:54:27.412652    1636 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:54 /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:54:27.412675    1636 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:54:27.414465    1636 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
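The b5213941.0 link name above is not arbitrary: OpenSSL looks CA certificates up by subject-hash filename, and the x509 -hash call two lines earlier is what produces it:

    # prints the subject hash (here b5213941); the CA must then be reachable
    # as /etc/ssl/certs/<hash>.0 for openssl's default verify lookup
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0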
	I0911 03:54:27.417219    1636 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 03:54:27.418566    1636 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 03:54:27.418603    1636 kubeadm.go:404] StartCluster: {Name:addons-136000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-136000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:54:27.418672    1636 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0911 03:54:27.424462    1636 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 03:54:27.427996    1636 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 03:54:27.430799    1636 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 03:54:27.433436    1636 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 03:54:27.433450    1636 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 03:54:27.457353    1636 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 03:54:27.457392    1636 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 03:54:27.513018    1636 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 03:54:27.513078    1636 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 03:54:27.513123    1636 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 03:54:27.571005    1636 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 03:54:27.579199    1636 out.go:204]   - Generating certificates and keys ...
	I0911 03:54:27.579233    1636 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 03:54:27.579266    1636 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 03:54:27.644461    1636 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 03:54:27.722201    1636 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 03:54:27.797772    1636 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 03:54:27.863629    1636 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 03:54:28.039956    1636 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 03:54:28.040045    1636 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-136000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0911 03:54:28.183561    1636 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 03:54:28.183634    1636 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-136000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0911 03:54:28.274448    1636 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 03:54:28.467285    1636 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 03:54:28.666426    1636 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 03:54:28.666460    1636 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 03:54:28.753707    1636 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 03:54:29.060317    1636 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 03:54:29.141823    1636 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 03:54:29.240824    1636 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 03:54:29.241004    1636 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 03:54:29.242025    1636 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 03:54:29.248198    1636 out.go:204]   - Booting up control plane ...
	I0911 03:54:29.248256    1636 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 03:54:29.248307    1636 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 03:54:29.248360    1636 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 03:54:29.249409    1636 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 03:54:29.249461    1636 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 03:54:29.249482    1636 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 03:54:29.311657    1636 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 03:54:32.813416    1636 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.501849 seconds
	I0911 03:54:32.813473    1636 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 03:54:32.820791    1636 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 03:54:33.329483    1636 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 03:54:33.329596    1636 kubeadm.go:322] [mark-control-plane] Marking the node addons-136000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 03:54:33.834271    1636 kubeadm.go:322] [bootstrap-token] Using token: htznma.s4ojdlamw62o7gwa
	I0911 03:54:33.841061    1636 out.go:204]   - Configuring RBAC rules ...
	I0911 03:54:33.841108    1636 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 03:54:33.841964    1636 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 03:54:33.847613    1636 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 03:54:33.849361    1636 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 03:54:33.850497    1636 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 03:54:33.851541    1636 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 03:54:33.855675    1636 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 03:54:34.014926    1636 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 03:54:34.245249    1636 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 03:54:34.245960    1636 kubeadm.go:322] 
	I0911 03:54:34.245992    1636 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 03:54:34.245995    1636 kubeadm.go:322] 
	I0911 03:54:34.246033    1636 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 03:54:34.246042    1636 kubeadm.go:322] 
	I0911 03:54:34.246056    1636 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 03:54:34.246098    1636 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 03:54:34.246122    1636 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 03:54:34.246127    1636 kubeadm.go:322] 
	I0911 03:54:34.246152    1636 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 03:54:34.246154    1636 kubeadm.go:322] 
	I0911 03:54:34.246183    1636 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 03:54:34.246186    1636 kubeadm.go:322] 
	I0911 03:54:34.246224    1636 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 03:54:34.246267    1636 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 03:54:34.246304    1636 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 03:54:34.246310    1636 kubeadm.go:322] 
	I0911 03:54:34.246357    1636 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 03:54:34.246391    1636 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 03:54:34.246395    1636 kubeadm.go:322] 
	I0911 03:54:34.246441    1636 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token htznma.s4ojdlamw62o7gwa \
	I0911 03:54:34.246513    1636 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:77399ad9541b4667fda28bf9bf29366ef8ebe6fdc39d6e893157dd935cb9f38b \
	I0911 03:54:34.246524    1636 kubeadm.go:322] 	--control-plane 
	I0911 03:54:34.246528    1636 kubeadm.go:322] 
	I0911 03:54:34.246593    1636 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 03:54:34.246596    1636 kubeadm.go:322] 
	I0911 03:54:34.246632    1636 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token htznma.s4ojdlamw62o7gwa \
	I0911 03:54:34.246689    1636 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:77399ad9541b4667fda28bf9bf29366ef8ebe6fdc39d6e893157dd935cb9f38b 
	I0911 03:54:34.246740    1636 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 03:54:34.246745    1636 cni.go:84] Creating CNI manager for ""
	I0911 03:54:34.246751    1636 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:54:34.253872    1636 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 03:54:34.255212    1636 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 03:54:34.258280    1636 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 03:54:34.262752    1636 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 03:54:34.262811    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:34.262818    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=addons-136000 minikube.k8s.io/updated_at=2023_09_11T03_54_34_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:34.323376    1636 ops.go:34] apiserver oom_adj: -16
	I0911 03:54:34.323388    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:34.355858    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:34.892705    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:35.392690    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:35.892691    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:36.392670    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:36.892684    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:37.392658    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:37.892707    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:38.392674    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:38.892650    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:39.392659    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:39.892627    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:40.392610    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:40.892598    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:41.392622    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:41.892563    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:42.390836    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:42.892498    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:43.392503    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:43.892496    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:44.391855    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:44.892471    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:45.392479    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:45.892495    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:46.392429    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:46.890731    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:47.392378    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:47.892368    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:48.392367    1636 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:54:48.437670    1636 kubeadm.go:1081] duration metric: took 14.175248542s to wait for elevateKubeSystemPrivileges.
	I0911 03:54:48.437683    1636 kubeadm.go:406] StartCluster complete in 21.019614625s
	I0911 03:54:48.437691    1636 settings.go:142] acquiring lock: {Name:mk1469232b3abbdcc69ed77e286fb2789adb44fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:54:48.437855    1636 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 03:54:48.438036    1636 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/kubeconfig: {Name:mk8b43c711db1489632c69fe978a061a5dcf6734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:54:48.438262    1636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 03:54:48.438303    1636 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0911 03:54:48.438353    1636 addons.go:69] Setting volumesnapshots=true in profile "addons-136000"
	I0911 03:54:48.438361    1636 addons.go:231] Setting addon volumesnapshots=true in "addons-136000"
	I0911 03:54:48.438370    1636 addons.go:69] Setting ingress=true in profile "addons-136000"
	I0911 03:54:48.438377    1636 addons.go:231] Setting addon ingress=true in "addons-136000"
	I0911 03:54:48.438410    1636 host.go:66] Checking if "addons-136000" exists ...
	I0911 03:54:48.438413    1636 addons.go:69] Setting default-storageclass=true in profile "addons-136000"
	I0911 03:54:48.438418    1636 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-136000"
	I0911 03:54:48.438446    1636 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-136000"
	I0911 03:54:48.438465    1636 addons.go:69] Setting gcp-auth=true in profile "addons-136000"
	I0911 03:54:48.438485    1636 mustload.go:65] Loading cluster: addons-136000
	I0911 03:54:48.438517    1636 config.go:182] Loaded profile config "addons-136000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:54:48.438539    1636 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-136000"
	I0911 03:54:48.438553    1636 addons.go:69] Setting metrics-server=true in profile "addons-136000"
	I0911 03:54:48.438562    1636 addons.go:231] Setting addon metrics-server=true in "addons-136000"
	I0911 03:54:48.438574    1636 host.go:66] Checking if "addons-136000" exists ...
	I0911 03:54:48.438608    1636 addons.go:69] Setting ingress-dns=true in profile "addons-136000"
	I0911 03:54:48.438613    1636 addons.go:231] Setting addon ingress-dns=true in "addons-136000"
	I0911 03:54:48.438630    1636 host.go:66] Checking if "addons-136000" exists ...
	I0911 03:54:48.438669    1636 host.go:66] Checking if "addons-136000" exists ...
	W0911 03:54:48.438675    1636 host.go:54] host status for "addons-136000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/monitor: connect: connection refused
	W0911 03:54:48.438683    1636 addons.go:277] "addons-136000" is not running, setting volumesnapshots=true and skipping enablement (err=<nil>)
	I0911 03:54:48.438410    1636 host.go:66] Checking if "addons-136000" exists ...
	I0911 03:54:48.438716    1636 config.go:182] Loaded profile config "addons-136000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:54:48.438802    1636 addons.go:69] Setting inspektor-gadget=true in profile "addons-136000"
	I0911 03:54:48.438807    1636 addons.go:231] Setting addon inspektor-gadget=true in "addons-136000"
	I0911 03:54:48.438809    1636 addons.go:69] Setting registry=true in profile "addons-136000"
	I0911 03:54:48.438822    1636 host.go:66] Checking if "addons-136000" exists ...
	W0911 03:54:48.438838    1636 host.go:54] host status for "addons-136000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/monitor: connect: connection refused
	W0911 03:54:48.438844    1636 addons.go:277] "addons-136000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	I0911 03:54:48.438877    1636 addons.go:231] Setting addon registry=true in "addons-136000"
	W0911 03:54:48.438895    1636 host.go:54] host status for "addons-136000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/monitor: connect: connection refused
	W0911 03:54:48.438900    1636 addons.go:277] "addons-136000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0911 03:54:48.438902    1636 addons.go:467] Verifying addon ingress=true in "addons-136000"
	I0911 03:54:48.438919    1636 host.go:66] Checking if "addons-136000" exists ...
	I0911 03:54:48.442871    1636 out.go:177] * Verifying ingress addon...
	I0911 03:54:48.438453    1636 addons.go:69] Setting cloud-spanner=true in profile "addons-136000"
	I0911 03:54:48.438987    1636 addons.go:69] Setting storage-provisioner=true in profile "addons-136000"
	W0911 03:54:48.439028    1636 host.go:54] host status for "addons-136000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/monitor: connect: connection refused
	W0911 03:54:48.439059    1636 host.go:54] host status for "addons-136000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/monitor: connect: connection refused
	W0911 03:54:48.439245    1636 host.go:54] host status for "addons-136000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/monitor: connect: connection refused
	I0911 03:54:48.449779    1636 addons.go:231] Setting addon default-storageclass=true in "addons-136000"
	W0911 03:54:48.452922    1636 addons.go:277] "addons-136000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	W0911 03:54:48.452936    1636 addons.go:277] "addons-136000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	I0911 03:54:48.452922    1636 addons.go:231] Setting addon cloud-spanner=true in "addons-136000"
	I0911 03:54:48.452954    1636 addons.go:231] Setting addon storage-provisioner=true in "addons-136000"
	I0911 03:54:48.453374    1636 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0911 03:54:48.455825    1636 out.go:177] 
	W0911 03:54:48.455840    1636 addons.go:277] "addons-136000" is not running, setting registry=true and skipping enablement (err=<nil>)
	I0911 03:54:48.458899    1636 addons.go:467] Verifying addon registry=true in "addons-136000"
	I0911 03:54:48.458906    1636 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-136000"
	I0911 03:54:48.458961    1636 host.go:66] Checking if "addons-136000" exists ...
	I0911 03:54:48.458962    1636 host.go:66] Checking if "addons-136000" exists ...
	W0911 03:54:48.467821    1636 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/monitor: connect: connection refused
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/monitor: connect: connection refused
	W0911 03:54:48.467826    1636 out.go:239] * 
	* 
	I0911 03:54:48.477804    1636 out.go:177] * Verifying csi-hostpath-driver addon...
	I0911 03:54:48.459040    1636 host.go:66] Checking if "addons-136000" exists ...
	I0911 03:54:48.459043    1636 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	W0911 03:54:48.468295    1636 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 03:54:48.478471    1636 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 03:54:48.482577    1636 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-136000" context rescaled to 1 replicas
	I0911 03:54:48.483430    1636 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0911 03:54:48.485346    1636 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 03:54:48.487798    1636 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 03:54:48.491665    1636 out.go:177] * Verifying registry addon...
	I0911 03:54:48.494786    1636 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 03:54:48.497765    1636 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	I0911 03:54:48.501892    1636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 03:54:48.501910    1636 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 03:54:48.501920    1636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 03:54:48.502316    1636 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0911 03:54:48.504836    1636 out.go:177] 
	I0911 03:54:48.504860    1636 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/id_rsa Username:docker}
	I0911 03:54:48.504871    1636 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/addons-136000/id_rsa Username:docker}
	I0911 03:54:48.511256    1636 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0911 03:54:48.516867    1636 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0911 03:54:48.522944    1636 out.go:177] * Verifying Kubernetes components...
	I0911 03:54:48.524112    1636 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0911 03:54:48.526822    1636 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 03:54:48.529835    1636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 03:54:48.529843    1636 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)

** /stderr **
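The --discovery-token-ca-cert-hash in the join commands above is reproducible: it is a SHA-256 digest of the cluster CA public key. A sketch of the documented kubeadm recipe, assuming kubeadm's default CA path (illustrative only, since this VM never stayed up):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# expected: 77399ad9541b4667fda28bf9bf29366ef8ebe6fdc39d6e893157dd935cb9f38b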
addons_test.go:90: out/minikube-darwin-arm64 start -p addons-136000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
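For reference, the coredns ConfigMap rewrite at 03:54:48.485 above is just two sed insertions into the stock Corefile: a hosts block before the forward directive, and log before errors. Assuming the default Corefile layout, the patched fragment would read roughly (a sketch, not captured from the cluster; "..." marks elided stock directives):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.105.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}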
--- FAIL: TestAddons/Setup (46.27s)
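The addon phase never effectively ran: every host.go probe above failed dialing the per-VM QMP monitor socket, and VM creation itself failed against /var/run/socket_vmnet. A minimal manual triage under those assumptions (illustrative commands, not part of the test; nc here is BSD netcat, which supports -U for Unix sockets):

	pgrep -fl socket_vmnet                     # is the vmnet relay daemon running?
	ls -l /var/run/socket_vmnet                # does the relay socket exist?
	nc -U /var/run/socket_vmnet </dev/null && echo reachable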

TestCertOptions (9.92s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-890000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-890000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.635622875s)

-- stdout --
	* [cert-options-890000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-890000 in cluster cert-options-890000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-890000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-890000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-890000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-890000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-890000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (84.388375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-890000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-890000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
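The four SAN assertions fail mechanically because no apiserver certificate was ever minted. On a VM that does boot, the same check reduces to reading the SAN extension, e.g. (illustrative):

	out/minikube-darwin-arm64 -p cert-options-890000 ssh -- \
	  "sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'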
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-890000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
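With clusters: null there is no server entry to carry port 8555. On a successful start the port assertion amounts to something like this sketch (context name assumed to match the profile):

	kubectl --context cert-options-890000 config view \
	  -o jsonpath='{.clusters[0].cluster.server}'      # expect https://<ip>:8555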
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-890000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-890000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (40.683958ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-890000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-890000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port.
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-890000"

-- /stdout --
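Had the node been running, the admin.conf check boils down to confirming the embedded server URL carries the custom port; an illustrative equivalent:

	out/minikube-darwin-arm64 ssh -p cert-options-890000 -- \
	  "sudo grep 'server:' /etc/kubernetes/admin.conf"   # expect a URL ending in :8555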
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-09-11 04:08:27.035884 -0700 PDT m=+889.611200292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-890000 -n cert-options-890000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-890000 -n cert-options-890000: exit status 7 (28.995ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-890000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-890000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-890000
--- FAIL: TestCertOptions (9.92s)
E0911 04:08:55.561984    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory

TestCertExpiration (195.14s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-402000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-402000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.762691834s)

-- stdout --
	* [cert-expiration-402000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-402000 in cluster cert-expiration-402000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-402000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-402000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-402000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-402000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-402000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.207941875s)

-- stdout --
	* [cert-expiration-402000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-402000 in cluster cert-expiration-402000
	* Restarting existing qemu2 VM for "cert-expiration-402000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-402000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-402000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-402000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-402000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-402000 in cluster cert-expiration-402000
	* Restarting existing qemu2 VM for "cert-expiration-402000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-402000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-402000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-09-11 04:11:27.116394 -0700 PDT m=+1069.697399126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-402000 -n cert-expiration-402000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-402000 -n cert-expiration-402000: exit status 7 (70.493875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-402000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-402000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-402000
--- FAIL: TestCertExpiration (195.14s)
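The 195.14s wall time is mostly the test sleeping through its own 3-minute --cert-expiration window (~9.8s failed first start + 180s wait + ~5.2s failed restart + cleanup). On a cluster that actually starts, the expiry being exercised is visible with, e.g. (illustrative):

	out/minikube-darwin-arm64 ssh -p cert-expiration-402000 -- \
	  "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"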

TestDockerFlags (9.97s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-282000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-282000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.717639792s)
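Exit status 80 again occurs before provisioning, so the --docker-env/--docker-opt values are never applied. For context, flag propagation on a healthy node shows up in dockerd's unit environment, roughly (hedged sketch):

	out/minikube-darwin-arm64 -p docker-flags-282000 ssh -- \
	  "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR and BAZ=BAT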

-- stdout --
	* [docker-flags-282000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-282000 in cluster docker-flags-282000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-282000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:08:07.302441    3291 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:08:07.302552    3291 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:08:07.302554    3291 out.go:309] Setting ErrFile to fd 2...
	I0911 04:08:07.302556    3291 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:08:07.302665    3291 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:08:07.303711    3291 out.go:303] Setting JSON to false
	I0911 04:08:07.318654    3291 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2261,"bootTime":1694428226,"procs":420,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:08:07.318707    3291 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:08:07.324472    3291 out.go:177] * [docker-flags-282000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:08:07.332422    3291 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:08:07.336439    3291 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:08:07.332503    3291 notify.go:220] Checking for updates...
	I0911 04:08:07.337817    3291 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:08:07.341437    3291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:08:07.344422    3291 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:08:07.347441    3291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:08:07.350704    3291 config.go:182] Loaded profile config "force-systemd-flag-513000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:08:07.350774    3291 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:08:07.350823    3291 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:08:07.355452    3291 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:08:07.362360    3291 start.go:298] selected driver: qemu2
	I0911 04:08:07.362366    3291 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:08:07.362372    3291 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:08:07.364359    3291 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:08:07.367432    3291 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:08:07.370561    3291 start_flags.go:917] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0911 04:08:07.370589    3291 cni.go:84] Creating CNI manager for ""
	I0911 04:08:07.370602    3291 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:08:07.370610    3291 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:08:07.370614    3291 start_flags.go:321] config:
	{Name:docker-flags-282000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-282000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:08:07.375318    3291 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:08:07.383403    3291 out.go:177] * Starting control plane node docker-flags-282000 in cluster docker-flags-282000
	I0911 04:08:07.387270    3291 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:08:07.387296    3291 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:08:07.387312    3291 cache.go:57] Caching tarball of preloaded images
	I0911 04:08:07.387378    3291 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:08:07.387390    3291 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:08:07.387458    3291 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/docker-flags-282000/config.json ...
	I0911 04:08:07.387477    3291 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/docker-flags-282000/config.json: {Name:mk70fa2daf16340c950e583def2591269df6d111 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:08:07.387692    3291 start.go:365] acquiring machines lock for docker-flags-282000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:08:07.387724    3291 start.go:369] acquired machines lock for "docker-flags-282000" in 26µs
	I0911 04:08:07.387736    3291 start.go:93] Provisioning new machine with config: &{Name:docker-flags-282000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-282000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:08:07.387775    3291 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:08:07.392380    3291 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0911 04:08:07.408635    3291 start.go:159] libmachine.API.Create for "docker-flags-282000" (driver="qemu2")
	I0911 04:08:07.408656    3291 client.go:168] LocalClient.Create starting
	I0911 04:08:07.408713    3291 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:08:07.408742    3291 main.go:141] libmachine: Decoding PEM data...
	I0911 04:08:07.408751    3291 main.go:141] libmachine: Parsing certificate...
	I0911 04:08:07.408794    3291 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:08:07.408813    3291 main.go:141] libmachine: Decoding PEM data...
	I0911 04:08:07.408822    3291 main.go:141] libmachine: Parsing certificate...
	I0911 04:08:07.409179    3291 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:08:07.522793    3291 main.go:141] libmachine: Creating SSH key...
	I0911 04:08:07.597520    3291 main.go:141] libmachine: Creating Disk image...
	I0911 04:08:07.597525    3291 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:08:07.597660    3291 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/disk.qcow2
	I0911 04:08:07.606125    3291 main.go:141] libmachine: STDOUT: 
	I0911 04:08:07.606141    3291 main.go:141] libmachine: STDERR: 
	I0911 04:08:07.606197    3291 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/disk.qcow2 +20000M
	I0911 04:08:07.613399    3291 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:08:07.613423    3291 main.go:141] libmachine: STDERR: 
	I0911 04:08:07.613440    3291 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/disk.qcow2
	I0911 04:08:07.613456    3291 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:08:07.613491    3291 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:be:05:d0:88:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/disk.qcow2
	I0911 04:08:07.615078    3291 main.go:141] libmachine: STDOUT: 
	I0911 04:08:07.615091    3291 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:08:07.615108    3291 client.go:171] LocalClient.Create took 206.452667ms
	I0911 04:08:09.617204    3291 start.go:128] duration metric: createHost completed in 2.229484s
	I0911 04:08:09.617570    3291 start.go:83] releasing machines lock for "docker-flags-282000", held for 2.229904s
	W0911 04:08:09.617634    3291 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:08:09.638783    3291 out.go:177] * Deleting "docker-flags-282000" in qemu2 ...
	W0911 04:08:09.655107    3291 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:08:09.655130    3291 start.go:687] Will try again in 5 seconds ...
	I0911 04:08:14.657217    3291 start.go:365] acquiring machines lock for docker-flags-282000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:08:14.657661    3291 start.go:369] acquired machines lock for "docker-flags-282000" in 350.958µs
	I0911 04:08:14.657787    3291 start.go:93] Provisioning new machine with config: &{Name:docker-flags-282000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:docker-flags-282000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:08:14.658159    3291 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:08:14.674740    3291 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0911 04:08:14.721317    3291 start.go:159] libmachine.API.Create for "docker-flags-282000" (driver="qemu2")
	I0911 04:08:14.721380    3291 client.go:168] LocalClient.Create starting
	I0911 04:08:14.721497    3291 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:08:14.721558    3291 main.go:141] libmachine: Decoding PEM data...
	I0911 04:08:14.721575    3291 main.go:141] libmachine: Parsing certificate...
	I0911 04:08:14.721672    3291 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:08:14.721713    3291 main.go:141] libmachine: Decoding PEM data...
	I0911 04:08:14.721727    3291 main.go:141] libmachine: Parsing certificate...
	I0911 04:08:14.722537    3291 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:08:14.855148    3291 main.go:141] libmachine: Creating SSH key...
	I0911 04:08:14.938275    3291 main.go:141] libmachine: Creating Disk image...
	I0911 04:08:14.938281    3291 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:08:14.938411    3291 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/disk.qcow2
	I0911 04:08:14.946763    3291 main.go:141] libmachine: STDOUT: 
	I0911 04:08:14.946776    3291 main.go:141] libmachine: STDERR: 
	I0911 04:08:14.946835    3291 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/disk.qcow2 +20000M
	I0911 04:08:14.953873    3291 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:08:14.953898    3291 main.go:141] libmachine: STDERR: 
	I0911 04:08:14.953908    3291 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/disk.qcow2
	I0911 04:08:14.953917    3291 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:08:14.953950    3291 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:97:58:5c:22:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/docker-flags-282000/disk.qcow2
	I0911 04:08:14.955468    3291 main.go:141] libmachine: STDOUT: 
	I0911 04:08:14.955486    3291 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:08:14.955498    3291 client.go:171] LocalClient.Create took 234.119875ms
	I0911 04:08:16.957630    3291 start.go:128] duration metric: createHost completed in 2.299510834s
	I0911 04:08:16.957719    3291 start.go:83] releasing machines lock for "docker-flags-282000", held for 2.300106292s
	W0911 04:08:16.958307    3291 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-282000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-282000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:08:16.966960    3291 out.go:177] 
	W0911 04:08:16.969007    3291 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:08:16.969029    3291 out.go:239] * 
	* 
	W0911 04:08:16.971554    3291 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:08:16.978876    3291 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-282000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
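(Every start attempt in this run dies at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so the qemu-system-aarch64 command it wraps is never launched. The "-netdev socket,id=net0,fd=3" flag in the command line above only works because the client is expected to dial the daemon's unix socket and hand the connected descriptor to QEMU as file descriptor 3. A minimal Go sketch of that fd-passing pattern, as an illustration of the mechanism only, not minikube's or socket_vmnet's actual code:

package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	// Dial the daemon's unix socket. This is the step that fails throughout
	// this report: with no socket_vmnet daemon listening, the kernel answers
	// "connection refused" and the VM is never started.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		log.Fatalf(`Failed to connect to "/var/run/socket_vmnet": %v`, err)
	}
	sock, err := conn.(*net.UnixConn).File()
	if err != nil {
		log.Fatal(err)
	}
	// ExtraFiles[0] becomes fd 3 in the child process, which is what the
	// "-netdev socket,id=net0,fd=3" flag in the log above refers to.
	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
	cmd.ExtraFiles = []*os.File{sock}
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("qemu exited: %v", err)
	}
}

With the dial failing, everything after it is unreachable, which is why each test below fails before any in-VM assertion runs.)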
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-282000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-282000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (77.048333ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-282000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-282000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-282000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-282000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-282000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-282000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (47.53175ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-282000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-282000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-282000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-282000\"\n"
panic.go:522: *** TestDockerFlags FAILED at 2023-09-11 04:08:17.119774 -0700 PDT m=+879.694776667
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-282000 -n docker-flags-282000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-282000 -n docker-flags-282000: exit status 7 (28.044791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-282000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-282000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-282000
--- FAIL: TestDockerFlags (9.97s)
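(Before re-running this group, it may be worth probing the daemon from the CI host: distinguishing a missing socket file from a daemon that is not accepting connections narrows the fix, e.g. restarting the daemon's launchd service versus reinstalling it. A small hypothetical checker, written against the paths seen in this log:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet"
	if _, err := os.Stat(sock); err != nil {
		fmt.Println("socket file missing, daemon likely never started:", err)
		return
	}
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" here is exactly the state this report shows:
		// the socket path exists but nothing is accepting on it.
		fmt.Println("daemon not accepting connections:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet daemon is reachable")
}

The same "Connection refused" recurs in TestForceSystemdFlag and TestForceSystemdEnv below, so a single host-side fix should clear all three.)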

TestForceSystemdFlag (10.55s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-513000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-513000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.345737791s)

-- stdout --
	* [force-systemd-flag-513000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-513000 in cluster force-systemd-flag-513000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-513000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:08:01.620425    3269 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:08:01.620561    3269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:08:01.620563    3269 out.go:309] Setting ErrFile to fd 2...
	I0911 04:08:01.620566    3269 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:08:01.620673    3269 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:08:01.621661    3269 out.go:303] Setting JSON to false
	I0911 04:08:01.636504    3269 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2255,"bootTime":1694428226,"procs":420,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:08:01.636586    3269 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:08:01.640763    3269 out.go:177] * [force-systemd-flag-513000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:08:01.644689    3269 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:08:01.647629    3269 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:08:01.644761    3269 notify.go:220] Checking for updates...
	I0911 04:08:01.655515    3269 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:08:01.658632    3269 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:08:01.661601    3269 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:08:01.664593    3269 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:08:01.667846    3269 config.go:182] Loaded profile config "force-systemd-env-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:08:01.667913    3269 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:08:01.667953    3269 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:08:01.672591    3269 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:08:01.678660    3269 start.go:298] selected driver: qemu2
	I0911 04:08:01.678665    3269 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:08:01.678671    3269 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:08:01.680566    3269 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:08:01.683578    3269 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:08:01.686686    3269 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0911 04:08:01.686704    3269 cni.go:84] Creating CNI manager for ""
	I0911 04:08:01.686709    3269 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:08:01.686713    3269 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:08:01.686719    3269 start_flags.go:321] config:
	{Name:force-systemd-flag-513000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-513000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:08:01.690682    3269 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:08:01.697530    3269 out.go:177] * Starting control plane node force-systemd-flag-513000 in cluster force-systemd-flag-513000
	I0911 04:08:01.701576    3269 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:08:01.701602    3269 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:08:01.701619    3269 cache.go:57] Caching tarball of preloaded images
	I0911 04:08:01.701679    3269 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:08:01.701684    3269 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:08:01.701753    3269 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/force-systemd-flag-513000/config.json ...
	I0911 04:08:01.701766    3269 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/force-systemd-flag-513000/config.json: {Name:mkb725e78518441deb9fdbcc1bcaaddf71cda4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:08:01.701982    3269 start.go:365] acquiring machines lock for force-systemd-flag-513000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:08:01.702012    3269 start.go:369] acquired machines lock for "force-systemd-flag-513000" in 24.125µs
	I0911 04:08:01.702023    3269 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-513000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:08:01.702053    3269 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:08:01.706595    3269 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0911 04:08:01.722606    3269 start.go:159] libmachine.API.Create for "force-systemd-flag-513000" (driver="qemu2")
	I0911 04:08:01.722631    3269 client.go:168] LocalClient.Create starting
	I0911 04:08:01.722689    3269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:08:01.722716    3269 main.go:141] libmachine: Decoding PEM data...
	I0911 04:08:01.722729    3269 main.go:141] libmachine: Parsing certificate...
	I0911 04:08:01.722771    3269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:08:01.722789    3269 main.go:141] libmachine: Decoding PEM data...
	I0911 04:08:01.722800    3269 main.go:141] libmachine: Parsing certificate...
	I0911 04:08:01.723130    3269 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:08:01.966948    3269 main.go:141] libmachine: Creating SSH key...
	I0911 04:08:02.090485    3269 main.go:141] libmachine: Creating Disk image...
	I0911 04:08:02.090491    3269 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:08:02.090664    3269 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/disk.qcow2
	I0911 04:08:02.099426    3269 main.go:141] libmachine: STDOUT: 
	I0911 04:08:02.099441    3269 main.go:141] libmachine: STDERR: 
	I0911 04:08:02.099491    3269 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/disk.qcow2 +20000M
	I0911 04:08:02.106812    3269 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:08:02.106839    3269 main.go:141] libmachine: STDERR: 
	I0911 04:08:02.106862    3269 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/disk.qcow2
	I0911 04:08:02.106871    3269 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:08:02.106904    3269 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:a9:ef:c3:b3:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/disk.qcow2
	I0911 04:08:02.108509    3269 main.go:141] libmachine: STDOUT: 
	I0911 04:08:02.108522    3269 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:08:02.108540    3269 client.go:171] LocalClient.Create took 385.915459ms
	I0911 04:08:04.110639    3269 start.go:128] duration metric: createHost completed in 2.408645666s
	I0911 04:08:04.110692    3269 start.go:83] releasing machines lock for "force-systemd-flag-513000", held for 2.408747333s
	W0911 04:08:04.110743    3269 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:08:04.118132    3269 out.go:177] * Deleting "force-systemd-flag-513000" in qemu2 ...
	W0911 04:08:04.139287    3269 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:08:04.139318    3269 start.go:687] Will try again in 5 seconds ...
	I0911 04:08:09.141417    3269 start.go:365] acquiring machines lock for force-systemd-flag-513000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:08:09.617692    3269 start.go:369] acquired machines lock for "force-systemd-flag-513000" in 476.156375ms
	I0911 04:08:09.617881    3269 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-513000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-flag-513000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:08:09.618192    3269 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:08:09.629818    3269 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0911 04:08:09.676058    3269 start.go:159] libmachine.API.Create for "force-systemd-flag-513000" (driver="qemu2")
	I0911 04:08:09.676091    3269 client.go:168] LocalClient.Create starting
	I0911 04:08:09.676203    3269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:08:09.676254    3269 main.go:141] libmachine: Decoding PEM data...
	I0911 04:08:09.676268    3269 main.go:141] libmachine: Parsing certificate...
	I0911 04:08:09.676336    3269 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:08:09.676371    3269 main.go:141] libmachine: Decoding PEM data...
	I0911 04:08:09.676382    3269 main.go:141] libmachine: Parsing certificate...
	I0911 04:08:09.676852    3269 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:08:09.801600    3269 main.go:141] libmachine: Creating SSH key...
	I0911 04:08:09.882780    3269 main.go:141] libmachine: Creating Disk image...
	I0911 04:08:09.882786    3269 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:08:09.882925    3269 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/disk.qcow2
	I0911 04:08:09.891365    3269 main.go:141] libmachine: STDOUT: 
	I0911 04:08:09.891382    3269 main.go:141] libmachine: STDERR: 
	I0911 04:08:09.891457    3269 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/disk.qcow2 +20000M
	I0911 04:08:09.898628    3269 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:08:09.898642    3269 main.go:141] libmachine: STDERR: 
	I0911 04:08:09.898655    3269 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/disk.qcow2
	I0911 04:08:09.898670    3269 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:08:09.898706    3269 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:0f:65:fb:8f:bd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-flag-513000/disk.qcow2
	I0911 04:08:09.900234    3269 main.go:141] libmachine: STDOUT: 
	I0911 04:08:09.900246    3269 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:08:09.900259    3269 client.go:171] LocalClient.Create took 224.171375ms
	I0911 04:08:11.902358    3269 start.go:128] duration metric: createHost completed in 2.284208542s
	I0911 04:08:11.902435    3269 start.go:83] releasing machines lock for "force-systemd-flag-513000", held for 2.284777083s
	W0911 04:08:11.902678    3269 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-513000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:08:11.911132    3269 out.go:177] 
	W0911 04:08:11.915176    3269 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:08:11.915196    3269 out.go:239] * 
	* 
	W0911 04:08:11.916657    3269 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:08:11.926097    3269 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-513000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-513000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-513000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (76.385583ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-513000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-513000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-09-11 04:08:12.018797 -0700 PDT m=+874.593639001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-513000 -n force-systemd-flag-513000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-513000 -n force-systemd-flag-513000: exit status 7 (33.109583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-513000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-513000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-513000
--- FAIL: TestForceSystemdFlag (10.55s)
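(For context, the assertion this test never reached: docker_test.go:110 runs `docker info --format {{.CgroupDriver}}` inside the VM, and with --force-systemd the expected answer is the systemd cgroup driver; the exact expected string is inferred from the flag's purpose, since this log never gets far enough to confirm it. A condensed, hypothetical rendition of that check, with the binary path and profile name taken from this run:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Ask dockerd inside the VM which cgroup driver it is using, via the
	// same command the test runs.
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-flag-513000",
		"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh failed: %v\n%s", err, out)
	}
	if !strings.Contains(string(out), "systemd") {
		log.Fatalf("expected systemd cgroup driver, got: %s", out)
	}
}

Here the ssh step itself exits 89 because the control plane node never started, so the check fails before the cgroup driver is ever inspected.)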

TestForceSystemdEnv (10s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-831000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
E0911 04:07:59.064676    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-831000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.790948292s)

-- stdout --
	* [force-systemd-env-831000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-831000 in cluster force-systemd-env-831000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-831000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:07:57.304674    3234 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:07:57.304769    3234 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:07:57.304771    3234 out.go:309] Setting ErrFile to fd 2...
	I0911 04:07:57.304773    3234 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:07:57.304890    3234 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:07:57.305943    3234 out.go:303] Setting JSON to false
	I0911 04:07:57.321485    3234 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2251,"bootTime":1694428226,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:07:57.321553    3234 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:07:57.326158    3234 out.go:177] * [force-systemd-env-831000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:07:57.337109    3234 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:07:57.334149    3234 notify.go:220] Checking for updates...
	I0911 04:07:57.345134    3234 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:07:57.353099    3234 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:07:57.361290    3234 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:07:57.369140    3234 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:07:57.377093    3234 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0911 04:07:57.381389    3234 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:07:57.381433    3234 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:07:57.384072    3234 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:07:57.391172    3234 start.go:298] selected driver: qemu2
	I0911 04:07:57.391182    3234 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:07:57.391189    3234 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:07:57.393285    3234 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:07:57.397104    3234 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:07:57.410679    3234 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0911 04:07:57.410708    3234 cni.go:84] Creating CNI manager for ""
	I0911 04:07:57.410717    3234 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:07:57.410720    3234 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:07:57.410734    3234 start_flags.go:321] config:
	{Name:force-systemd-env-831000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-831000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:07:57.415385    3234 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:07:57.420084    3234 out.go:177] * Starting control plane node force-systemd-env-831000 in cluster force-systemd-env-831000
	I0911 04:07:57.424141    3234 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:07:57.424158    3234 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:07:57.424172    3234 cache.go:57] Caching tarball of preloaded images
	I0911 04:07:57.424226    3234 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:07:57.424231    3234 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:07:57.424296    3234 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/force-systemd-env-831000/config.json ...
	I0911 04:07:57.424307    3234 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/force-systemd-env-831000/config.json: {Name:mkba5f558f96aaec2d8d838e562f2d7f5075ded5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:07:57.424487    3234 start.go:365] acquiring machines lock for force-systemd-env-831000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:07:57.424518    3234 start.go:369] acquired machines lock for "force-systemd-env-831000" in 22.208µs
	I0911 04:07:57.424529    3234 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-831000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-831000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:07:57.424562    3234 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:07:57.429133    3234 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0911 04:07:57.443526    3234 start.go:159] libmachine.API.Create for "force-systemd-env-831000" (driver="qemu2")
	I0911 04:07:57.443551    3234 client.go:168] LocalClient.Create starting
	I0911 04:07:57.443611    3234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:07:57.443642    3234 main.go:141] libmachine: Decoding PEM data...
	I0911 04:07:57.443654    3234 main.go:141] libmachine: Parsing certificate...
	I0911 04:07:57.443694    3234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:07:57.443720    3234 main.go:141] libmachine: Decoding PEM data...
	I0911 04:07:57.443730    3234 main.go:141] libmachine: Parsing certificate...
	I0911 04:07:57.444038    3234 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:07:57.660330    3234 main.go:141] libmachine: Creating SSH key...
	I0911 04:07:57.710121    3234 main.go:141] libmachine: Creating Disk image...
	I0911 04:07:57.710132    3234 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:07:57.710298    3234 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/disk.qcow2
	I0911 04:07:57.719471    3234 main.go:141] libmachine: STDOUT: 
	I0911 04:07:57.719488    3234 main.go:141] libmachine: STDERR: 
	I0911 04:07:57.719570    3234 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/disk.qcow2 +20000M
	I0911 04:07:57.727683    3234 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:07:57.727700    3234 main.go:141] libmachine: STDERR: 
	I0911 04:07:57.727731    3234 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/disk.qcow2
	I0911 04:07:57.727737    3234 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:07:57.727776    3234 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:54:07:29:df:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/disk.qcow2
	I0911 04:07:57.729423    3234 main.go:141] libmachine: STDOUT: 
	I0911 04:07:57.729436    3234 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:07:57.729456    3234 client.go:171] LocalClient.Create took 285.909125ms
	I0911 04:07:59.731596    3234 start.go:128] duration metric: createHost completed in 2.307077667s
	I0911 04:07:59.731677    3234 start.go:83] releasing machines lock for "force-systemd-env-831000", held for 2.307222917s
	W0911 04:07:59.731782    3234 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:07:59.738866    3234 out.go:177] * Deleting "force-systemd-env-831000" in qemu2 ...
	W0911 04:07:59.762882    3234 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:07:59.762910    3234 start.go:687] Will try again in 5 seconds ...
	I0911 04:08:04.764979    3234 start.go:365] acquiring machines lock for force-systemd-env-831000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:08:04.765448    3234 start.go:369] acquired machines lock for "force-systemd-env-831000" in 350.541µs
	I0911 04:08:04.765588    3234 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-831000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:force-systemd-env-831000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:08:04.765900    3234 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:08:04.775406    3234 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0911 04:08:04.823708    3234 start.go:159] libmachine.API.Create for "force-systemd-env-831000" (driver="qemu2")
	I0911 04:08:04.823754    3234 client.go:168] LocalClient.Create starting
	I0911 04:08:04.823888    3234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:08:04.823962    3234 main.go:141] libmachine: Decoding PEM data...
	I0911 04:08:04.823985    3234 main.go:141] libmachine: Parsing certificate...
	I0911 04:08:04.824071    3234 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:08:04.824114    3234 main.go:141] libmachine: Decoding PEM data...
	I0911 04:08:04.824129    3234 main.go:141] libmachine: Parsing certificate...
	I0911 04:08:04.824892    3234 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:08:04.954218    3234 main.go:141] libmachine: Creating SSH key...
	I0911 04:08:05.009072    3234 main.go:141] libmachine: Creating Disk image...
	I0911 04:08:05.009077    3234 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:08:05.009221    3234 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/disk.qcow2
	I0911 04:08:05.017637    3234 main.go:141] libmachine: STDOUT: 
	I0911 04:08:05.017653    3234 main.go:141] libmachine: STDERR: 
	I0911 04:08:05.017704    3234 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/disk.qcow2 +20000M
	I0911 04:08:05.024774    3234 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:08:05.024792    3234 main.go:141] libmachine: STDERR: 
	I0911 04:08:05.024807    3234 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/disk.qcow2
	I0911 04:08:05.024810    3234 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:08:05.024855    3234 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:36:b9:56:0e:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/force-systemd-env-831000/disk.qcow2
	I0911 04:08:05.026378    3234 main.go:141] libmachine: STDOUT: 
	I0911 04:08:05.026392    3234 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:08:05.026404    3234 client.go:171] LocalClient.Create took 202.649416ms
	I0911 04:08:07.028559    3234 start.go:128] duration metric: createHost completed in 2.262691583s
	I0911 04:08:07.028651    3234 start.go:83] releasing machines lock for "force-systemd-env-831000", held for 2.263248792s
	W0911 04:08:07.029156    3234 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-831000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:08:07.037787    3234 out.go:177] 
	W0911 04:08:07.042983    3234 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:08:07.043010    3234 out.go:239] * 
	W0911 04:08:07.045637    3234 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:08:07.054700    3234 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-831000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-831000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-831000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (75.478959ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-831000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-831000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
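
For context: the --format argument the test passes to "docker info" is a Go template, and TestForceSystemdEnv expects it to render "systemd" once MINIKUBE_FORCE_SYSTEMD is exported (the variable is visible in the Last Start log further below). As a hedged illustration, the snippet evaluates the same {{.CgroupDriver}} expression locally; the dockerInfo struct is a stand-in for the daemon's real system-info type, not part of the harness.

// cgroupdriver_template.go: sketch of how `docker info --format {{.CgroupDriver}}`
// evaluates its argument as a Go template. Illustrative only.
package main

import (
	"os"
	"text/template"
)

// dockerInfo stands in for the structure docker exposes to --format templates;
// only the field the test reads is modeled here (an assumption for this sketch).
type dockerInfo struct {
	CgroupDriver string
}

func main() {
	tmpl := template.Must(template.New("info").Parse("{{.CgroupDriver}}\n"))
	_ = tmpl.Execute(os.Stdout, dockerInfo{CgroupDriver: "systemd"})
}
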
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-09-11 04:08:07.146366 -0700 PDT m=+869.721053126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-831000 -n force-systemd-env-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-831000 -n force-systemd-env-831000: exit status 7 (33.194459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-831000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-831000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-831000
--- FAIL: TestForceSystemdEnv (10.00s)
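
Analysis: both provisioning attempts above die at the same step; socket_vmnet_client cannot dial /var/run/socket_vmnet ("Connection refused"), which means the socket_vmnet daemon was not listening on this CI host when QEMU was launched. A minimal pre-flight probe for that socket is sketched below in Go; the socket path comes from the log, and the program is illustrative rather than part of the suite.

// socket_preflight.go: check that the socket_vmnet daemon is accepting
// connections before running QEMU-driver tests. Illustrative only.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // assumption: default path, as seen in the log above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// This is the exact condition behind the repeated
		// `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is up; QEMU-driver tests can proceed")
}

Running a probe like this ahead of the QEMU-driver groups would turn each of these ten-second provisioning failures into a single, clearly attributed environment error.
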

TestFunctional/parallel/ServiceCmdConnect (41.98s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-740000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-740000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-cfnsh" [3917ab83-9989-4db0-8df1-13ea64cad278] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-cfnsh" [3917ab83-9989-4db0-8df1-13ea64cad278] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.008220958s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:31218
functional_test.go:1660: error fetching http://192.168.105.4:31218: Get "http://192.168.105.4:31218": dial tcp 192.168.105.4:31218: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31218: Get "http://192.168.105.4:31218": dial tcp 192.168.105.4:31218: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31218: Get "http://192.168.105.4:31218": dial tcp 192.168.105.4:31218: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31218: Get "http://192.168.105.4:31218": dial tcp 192.168.105.4:31218: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31218: Get "http://192.168.105.4:31218": dial tcp 192.168.105.4:31218: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31218: Get "http://192.168.105.4:31218": dial tcp 192.168.105.4:31218: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31218: Get "http://192.168.105.4:31218": dial tcp 192.168.105.4:31218: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:31218: Get "http://192.168.105.4:31218": dial tcp 192.168.105.4:31218: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:31218: Get "http://192.168.105.4:31218": dial tcp 192.168.105.4:31218: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-740000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-cfnsh
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-740000/192.168.105.4
Start Time:       Mon, 11 Sep 2023 03:57:40 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
  echoserver-arm:
    Container ID:   docker://5f7f8bbb11afcb944e92ce624c55cd448ede8b61eb8763fb0ef446b5df013834
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 11 Sep 2023 03:57:59 -0700
      Finished:     Mon, 11 Sep 2023 03:57:59 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dsvv5 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-dsvv5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  40s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-cfnsh to functional-740000
  Normal   Pulling    40s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     36s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 3.354s (4.33s including waiting)
  Normal   Created    22s (x3 over 35s)  kubelet            Created container echoserver-arm
  Normal   Started    22s (x3 over 35s)  kubelet            Started container echoserver-arm
  Normal   Pulled     22s (x2 over 34s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    9s (x4 over 33s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-cfnsh_default(3917ab83-9989-4db0-8df1-13ea64cad278)

functional_test.go:1607: (dbg) Run:  kubectl --context functional-740000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
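
The single log line above is the root cause of the CrashLoopBackOff: the kernel refuses to exec the image's /usr/sbin/nginx because the binary format does not match this arm64 host, so registry.k8s.io/echoserver-arm:1.8 appears to ship a non-arm64 entrypoint. The hedged Go sketch below uses the standard debug/elf package to show what "exec format error" corresponds to at the ELF level; the binary argument is whatever you extract from the image (for example via docker cp), not something the harness produces.

// archcheck.go: report the ELF machine type of a binary. On an arm64 host,
// exec of anything other than EM_AARCH64 fails with ENOEXEC, which surfaces
// as "exec format error". Illustrative only.
package main

import (
	"debug/elf"
	"fmt"
	"log"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: archcheck <binary>")
	}
	f, err := elf.Open(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	fmt.Printf("ELF machine: %v\n", f.Machine)
	if f.Machine != elf.EM_AARCH64 {
		fmt.Println("not an arm64 binary; exec on this host would fail with \"exec format error\"")
	}
}
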
functional_test.go:1613: (dbg) Run:  kubectl --context functional-740000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.83.146
IPs:                      10.104.83.146
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31218/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
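
Note that Endpoints is empty: the crashlooping pod never becomes Ready, so NodePort 31218 has no backend and every fetch in the log above is refused immediately. The Go sketch below mirrors the kind of dial-and-retry loop the harness runs against that URL; address and port are taken from the dump above, and the program is illustrative only.

// nodeport_probe.go: dial the NodePort with a bounded retry loop, roughly
// what functional_test.go's fetch loop observes. Illustrative only.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.105.4:31218" // node IP and NodePort from the service dump
	for attempt := 1; attempt <= 8; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("NodePort is accepting connections")
			return
		}
		// With no ready endpoints, this fails fast with "connection refused".
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(2 * time.Second)
	}
}
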
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-740000 -n functional-740000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons  | functional-740000 addons list                                                                                        | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT | 11 Sep 23 03:57 PDT |
	| addons  | functional-740000 addons list                                                                                        | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT | 11 Sep 23 03:57 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-740000 service                                                                                            | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT | 11 Sep 23 03:57 PDT |
	|         | hello-node-connect --url                                                                                             |                   |         |         |                     |                     |
	| service | functional-740000 service list                                                                                       | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	| service | functional-740000 service list                                                                                       | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|         | -o json                                                                                                              |                   |         |         |                     |                     |
	| service | functional-740000 service                                                                                            | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|         | --namespace=default --https                                                                                          |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                     |                   |         |         |                     |                     |
	| service | functional-740000                                                                                                    | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|         | service hello-node --url                                                                                             |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                     |                   |         |         |                     |                     |
	| service | functional-740000 service                                                                                            | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|         | hello-node --url                                                                                                     |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh findmnt                                                                                        | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-740000                                                                                                 | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2075074336/001:/mount-9p      |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh findmnt                                                                                        | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh -- ls                                                                                          | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh cat                                                                                            | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|         | /mount-9p/test-1694429884627191000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh stat                                                                                           | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh stat                                                                                           | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh sudo                                                                                           | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh findmnt                                                                                        | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-740000                                                                                                 | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1974937509/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh findmnt                                                                                        | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh findmnt                                                                                        | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh findmnt                                                                                        | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh findmnt                                                                                        | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh findmnt                                                                                        | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh findmnt                                                                                        | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh sudo                                                                                           | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT |                     |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 03:56:49
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 03:56:49.517571    1912 out.go:296] Setting OutFile to fd 1 ...
	I0911 03:56:49.517690    1912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:56:49.517692    1912 out.go:309] Setting ErrFile to fd 2...
	I0911 03:56:49.517694    1912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:56:49.517807    1912 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 03:56:49.518952    1912 out.go:303] Setting JSON to false
	I0911 03:56:49.535000    1912 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1583,"bootTime":1694428226,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 03:56:49.535057    1912 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 03:56:49.539936    1912 out.go:177] * [functional-740000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 03:56:49.545960    1912 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 03:56:49.549899    1912 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 03:56:49.545991    1912 notify.go:220] Checking for updates...
	I0911 03:56:49.556856    1912 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 03:56:49.559909    1912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 03:56:49.562915    1912 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 03:56:49.564216    1912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 03:56:49.567119    1912 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:56:49.567171    1912 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 03:56:49.571900    1912 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 03:56:49.578893    1912 start.go:298] selected driver: qemu2
	I0911 03:56:49.578898    1912 start.go:902] validating driver "qemu2" against &{Name:functional-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-740000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:56:49.578945    1912 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 03:56:49.580776    1912 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 03:56:49.580798    1912 cni.go:84] Creating CNI manager for ""
	I0911 03:56:49.580802    1912 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:56:49.580807    1912 start_flags.go:321] config:
	{Name:functional-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-740000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:56:49.584500    1912 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 03:56:49.592884    1912 out.go:177] * Starting control plane node functional-740000 in cluster functional-740000
	I0911 03:56:49.596892    1912 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 03:56:49.596905    1912 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 03:56:49.596915    1912 cache.go:57] Caching tarball of preloaded images
	I0911 03:56:49.597153    1912 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 03:56:49.597197    1912 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 03:56:49.597266    1912 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/config.json ...
	I0911 03:56:49.597528    1912 start.go:365] acquiring machines lock for functional-740000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 03:56:49.597559    1912 start.go:369] acquired machines lock for "functional-740000" in 26.709µs
	I0911 03:56:49.597570    1912 start.go:96] Skipping create...Using existing machine configuration
	I0911 03:56:49.597573    1912 fix.go:54] fixHost starting: 
	I0911 03:56:49.598304    1912 fix.go:102] recreateIfNeeded on functional-740000: state=Running err=<nil>
	W0911 03:56:49.598313    1912 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 03:56:49.605907    1912 out.go:177] * Updating the running qemu2 "functional-740000" VM ...
	I0911 03:56:49.609895    1912 machine.go:88] provisioning docker machine ...
	I0911 03:56:49.609903    1912 buildroot.go:166] provisioning hostname "functional-740000"
	I0911 03:56:49.609939    1912 main.go:141] libmachine: Using SSH client type: native
	I0911 03:56:49.610172    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024323b0] 0x102434e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:56:49.610176    1912 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-740000 && echo "functional-740000" | sudo tee /etc/hostname
	I0911 03:56:49.676058    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-740000
	
	I0911 03:56:49.676097    1912 main.go:141] libmachine: Using SSH client type: native
	I0911 03:56:49.676329    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024323b0] 0x102434e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:56:49.676339    1912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-740000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-740000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-740000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 03:56:49.736638    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 03:56:49.736643    1912 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17223-1124/.minikube CaCertPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17223-1124/.minikube}
	I0911 03:56:49.736648    1912 buildroot.go:174] setting up certificates
	I0911 03:56:49.736655    1912 provision.go:83] configureAuth start
	I0911 03:56:49.736657    1912 provision.go:138] copyHostCerts
	I0911 03:56:49.736716    1912 exec_runner.go:144] found /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.pem, removing ...
	I0911 03:56:49.736719    1912 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.pem
	I0911 03:56:49.736827    1912 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.pem (1078 bytes)
	I0911 03:56:49.736986    1912 exec_runner.go:144] found /Users/jenkins/minikube-integration/17223-1124/.minikube/cert.pem, removing ...
	I0911 03:56:49.736987    1912 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17223-1124/.minikube/cert.pem
	I0911 03:56:49.737027    1912 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17223-1124/.minikube/cert.pem (1123 bytes)
	I0911 03:56:49.737110    1912 exec_runner.go:144] found /Users/jenkins/minikube-integration/17223-1124/.minikube/key.pem, removing ...
	I0911 03:56:49.737111    1912 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17223-1124/.minikube/key.pem
	I0911 03:56:49.737148    1912 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17223-1124/.minikube/key.pem (1679 bytes)
	I0911 03:56:49.737213    1912 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca-key.pem org=jenkins.functional-740000 san=[192.168.105.4 192.168.105.4 localhost 127.0.0.1 minikube functional-740000]
	I0911 03:56:49.837614    1912 provision.go:172] copyRemoteCerts
	I0911 03:56:49.837646    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 03:56:49.837652    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/id_rsa Username:docker}
	I0911 03:56:49.870603    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0911 03:56:49.877587    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0911 03:56:49.886129    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 03:56:49.892737    1912 provision.go:86] duration metric: configureAuth took 156.076834ms
	I0911 03:56:49.892742    1912 buildroot.go:189] setting minikube options for container-runtime
	I0911 03:56:49.892855    1912 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:56:49.892882    1912 main.go:141] libmachine: Using SSH client type: native
	I0911 03:56:49.893095    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024323b0] 0x102434e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:56:49.893098    1912 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0911 03:56:49.955824    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0911 03:56:49.955832    1912 buildroot.go:70] root file system type: tmpfs
	I0911 03:56:49.955880    1912 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0911 03:56:49.955933    1912 main.go:141] libmachine: Using SSH client type: native
	I0911 03:56:49.956164    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024323b0] 0x102434e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:56:49.956199    1912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0911 03:56:50.021699    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0911 03:56:50.021743    1912 main.go:141] libmachine: Using SSH client type: native
	I0911 03:56:50.021975    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024323b0] 0x102434e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:56:50.021982    1912 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0911 03:56:50.085667    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 03:56:50.085673    1912 machine.go:91] provisioned docker machine in 475.786708ms
	I0911 03:56:50.085677    1912 start.go:300] post-start starting for "functional-740000" (driver="qemu2")
	I0911 03:56:50.085681    1912 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 03:56:50.085729    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 03:56:50.085736    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/id_rsa Username:docker}
	I0911 03:56:50.120585    1912 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 03:56:50.122085    1912 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 03:56:50.122093    1912 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17223-1124/.minikube/addons for local assets ...
	I0911 03:56:50.122160    1912 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17223-1124/.minikube/files for local assets ...
	I0911 03:56:50.122262    1912 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem -> 15652.pem in /etc/ssl/certs
	I0911 03:56:50.122360    1912 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/test/nested/copy/1565/hosts -> hosts in /etc/test/nested/copy/1565
	I0911 03:56:50.122391    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1565
	I0911 03:56:50.125028    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem --> /etc/ssl/certs/15652.pem (1708 bytes)
	I0911 03:56:50.131840    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/test/nested/copy/1565/hosts --> /etc/test/nested/copy/1565/hosts (40 bytes)
	I0911 03:56:50.138919    1912 start.go:303] post-start completed in 53.238542ms
	I0911 03:56:50.138923    1912 fix.go:56] fixHost completed within 541.365042ms
	I0911 03:56:50.138961    1912 main.go:141] libmachine: Using SSH client type: native
	I0911 03:56:50.139200    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024323b0] 0x102434e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:56:50.139203    1912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 03:56:50.201190    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694429810.258259928
	
	I0911 03:56:50.201194    1912 fix.go:206] guest clock: 1694429810.258259928
	I0911 03:56:50.201197    1912 fix.go:219] Guest: 2023-09-11 03:56:50.258259928 -0700 PDT Remote: 2023-09-11 03:56:50.138924 -0700 PDT m=+0.641687709 (delta=119.335928ms)
	I0911 03:56:50.201209    1912 fix.go:190] guest clock delta is within tolerance: 119.335928ms
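The clock check above runs date +%s.%N inside the guest over SSH and compares the result against the host clock, accepting the skew when the delta stays inside tolerance. A minimal host-side sketch of the same measurement ($GUEST is a hypothetical ssh target for the VM; not from the log):

    host=$(date +%s.%N)
    guest=$(ssh "$GUEST" 'date +%s.%N')
    # bc handles the fractional seconds and a possibly negative delta.
    delta=$(echo "$guest - $host" | bc)
    echo "guest clock delta: ${delta}s"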
	I0911 03:56:50.201210    1912 start.go:83] releasing machines lock for "functional-740000", held for 603.661208ms
	I0911 03:56:50.201524    1912 ssh_runner.go:195] Run: cat /version.json
	I0911 03:56:50.201530    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/id_rsa Username:docker}
	I0911 03:56:50.201534    1912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 03:56:50.201548    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/id_rsa Username:docker}
	I0911 03:56:50.274603    1912 ssh_runner.go:195] Run: systemctl --version
	I0911 03:56:50.276553    1912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 03:56:50.278457    1912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 03:56:50.278487    1912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 03:56:50.281545    1912 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0911 03:56:50.281551    1912 start.go:466] detecting cgroup driver to use...
	I0911 03:56:50.281624    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 03:56:50.287575    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0911 03:56:50.291312    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0911 03:56:50.294400    1912 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0911 03:56:50.294419    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0911 03:56:50.297621    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 03:56:50.300600    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0911 03:56:50.303935    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 03:56:50.307534    1912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 03:56:50.310683    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0911 03:56:50.313893    1912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 03:56:50.316551    1912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 03:56:50.319820    1912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:56:50.405002    1912 ssh_runner.go:195] Run: sudo systemctl restart containerd
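The sed edits above rewrite /etc/containerd/config.toml in place so that containerd's runc shim uses the cgroupfs driver (rather than the systemd driver) and points at the expected CNI conf dir, then reload and restart the daemon. The core of that sequence, exactly as shown in the log:

    # Switch containerd to the cgroupfs cgroup driver, preserving indentation.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo systemctl daemon-reload
    sudo systemctl restart containerd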
	I0911 03:56:50.411107    1912 start.go:466] detecting cgroup driver to use...
	I0911 03:56:50.411146    1912 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0911 03:56:50.419955    1912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 03:56:50.424885    1912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 03:56:50.431043    1912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 03:56:50.435979    1912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0911 03:56:50.440935    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 03:56:50.446042    1912 ssh_runner.go:195] Run: which cri-dockerd
	I0911 03:56:50.447416    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0911 03:56:50.450411    1912 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0911 03:56:50.454903    1912 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0911 03:56:50.539439    1912 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0911 03:56:50.621796    1912 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0911 03:56:50.621808    1912 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
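The log records only the size (144 bytes) of the /etc/docker/daemon.json it copies over, not its body. A representative file selecting the cgroupfs driver might look like the following; the exact contents here are an assumption, only the "exec-opts"/"native.cgroupdriver" knob is standard Docker daemon configuration:

    # Contents are illustrative, not recovered from the log.
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF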
	I0911 03:56:50.627438    1912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:56:50.717537    1912 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0911 03:57:02.065071    1912 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.347810584s)
	I0911 03:57:02.065137    1912 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0911 03:57:02.137833    1912 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0911 03:57:02.214987    1912 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0911 03:57:02.277189    1912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:57:02.340903    1912 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0911 03:57:02.348599    1912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:57:02.418320    1912 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0911 03:57:02.443448    1912 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0911 03:57:02.443538    1912 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
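The single stat call above succeeds because the socket already exists; the surrounding code allows up to 60s. An equivalent poll loop (the loop itself is illustrative; the log shows one stat invocation):

    for _ in $(seq 60); do
        stat /var/run/cri-dockerd.sock >/dev/null 2>&1 && break
        sleep 1
    done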
	I0911 03:57:02.446058    1912 start.go:534] Will wait 60s for crictl version
	I0911 03:57:02.446103    1912 ssh_runner.go:195] Run: which crictl
	I0911 03:57:02.447686    1912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 03:57:02.460360    1912 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.5
	RuntimeApiVersion:  v1alpha2
	I0911 03:57:02.460430    1912 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0911 03:57:02.468347    1912 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0911 03:57:02.480610    1912 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	I0911 03:57:02.480761    1912 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0911 03:57:02.487504    1912 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0911 03:57:02.489063    1912 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 03:57:02.489120    1912 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0911 03:57:02.499150    1912 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-740000
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0911 03:57:02.499158    1912 docker.go:566] Images already preloaded, skipping extraction
	I0911 03:57:02.499208    1912 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0911 03:57:02.504859    1912 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-740000
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0911 03:57:02.504865    1912 cache_images.go:84] Images are preloaded, skipping loading
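Both listings above come from docker images --format '{{.Repository}}:{{.Tag}}'; preload extraction is skipped because every expected tag is already present. A sketch of that presence check (the expected list is truncated here to two of the images above):

    have=$(docker images --format '{{.Repository}}:{{.Tag}}')
    for img in registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/pause:3.9; do
        # -F fixed string, -x whole-line match, -q quiet.
        echo "$have" | grep -qxF "$img" || echo "missing: $img"
    done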
	I0911 03:57:02.504921    1912 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0911 03:57:02.512546    1912 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0911 03:57:02.512560    1912 cni.go:84] Creating CNI manager for ""
	I0911 03:57:02.512565    1912 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:57:02.512568    1912 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 03:57:02.512576    1912 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-740000 NodeName:functional-740000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 03:57:02.512630    1912 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-740000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 03:57:02.512661    1912 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-740000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:functional-740000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0911 03:57:02.512727    1912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 03:57:02.516138    1912 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 03:57:02.516161    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 03:57:02.519376    1912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0911 03:57:02.524608    1912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 03:57:02.529733    1912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1953 bytes)
	I0911 03:57:02.534573    1912 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0911 03:57:02.535755    1912 certs.go:56] Setting up /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000 for IP: 192.168.105.4
	I0911 03:57:02.535762    1912 certs.go:190] acquiring lock for shared ca certs: {Name:mk38c09806021c18792511eb48bf232ccb80ec29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:57:02.535892    1912 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.key
	I0911 03:57:02.535932    1912 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.key
	I0911 03:57:02.535992    1912 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.key
	I0911 03:57:02.536035    1912 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/apiserver.key.942c473b
	I0911 03:57:02.536068    1912 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/proxy-client.key
	I0911 03:57:02.536207    1912 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/1565.pem (1338 bytes)
	W0911 03:57:02.536230    1912 certs.go:433] ignoring /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/1565_empty.pem, impossibly tiny 0 bytes
	I0911 03:57:02.536235    1912 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 03:57:02.536257    1912 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem (1078 bytes)
	I0911 03:57:02.536278    1912 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem (1123 bytes)
	I0911 03:57:02.536295    1912 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/key.pem (1679 bytes)
	I0911 03:57:02.536335    1912 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem (1708 bytes)
	I0911 03:57:02.536699    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 03:57:02.543670    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 03:57:02.550810    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 03:57:02.557500    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0911 03:57:02.564222    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 03:57:02.571932    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 03:57:02.579401    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 03:57:02.586745    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0911 03:57:02.593647    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem --> /usr/share/ca-certificates/15652.pem (1708 bytes)
	I0911 03:57:02.600507    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 03:57:02.607810    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/1565.pem --> /usr/share/ca-certificates/1565.pem (1338 bytes)
	I0911 03:57:02.615366    1912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 03:57:02.620417    1912 ssh_runner.go:195] Run: openssl version
	I0911 03:57:02.622475    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1565.pem && ln -fs /usr/share/ca-certificates/1565.pem /etc/ssl/certs/1565.pem"
	I0911 03:57:02.625412    1912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1565.pem
	I0911 03:57:02.626941    1912 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 10:55 /usr/share/ca-certificates/1565.pem
	I0911 03:57:02.626957    1912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1565.pem
	I0911 03:57:02.628907    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1565.pem /etc/ssl/certs/51391683.0"
	I0911 03:57:02.631950    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15652.pem && ln -fs /usr/share/ca-certificates/15652.pem /etc/ssl/certs/15652.pem"
	I0911 03:57:02.635297    1912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15652.pem
	I0911 03:57:02.637109    1912 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 10:55 /usr/share/ca-certificates/15652.pem
	I0911 03:57:02.637122    1912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15652.pem
	I0911 03:57:02.639000    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15652.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 03:57:02.641779    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 03:57:02.644874    1912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:57:02.646380    1912 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:54 /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:57:02.646401    1912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:57:02.648053    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
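The openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to look up CAs in /etc/ssl/certs, and each ln -fs creates the corresponding <hash>.0 symlink (51391683.0, 3ec20f2e.0, b5213941.0). The same two steps for a single certificate, using a path from the log:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"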
	I0911 03:57:02.651179    1912 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 03:57:02.652488    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 03:57:02.654338    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 03:57:02.656168    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 03:57:02.658159    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 03:57:02.659968    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 03:57:02.661777    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
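Each probe above uses openssl x509 -checkend 86400, which exits nonzero if the certificate expires within the next 86400 seconds (24 hours). For example:

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
        echo "certificate valid for at least another 24h"
    else
        echo "certificate expires within 24h (or could not be read)"
    fi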
	I0911 03:57:02.663608    1912 kubeadm.go:404] StartCluster: {Name:functional-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-740000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:57:02.663670    1912 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0911 03:57:02.669542    1912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 03:57:02.672491    1912 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 03:57:02.672498    1912 kubeadm.go:636] restartCluster start
	I0911 03:57:02.672522    1912 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 03:57:02.675543    1912 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 03:57:02.675816    1912 kubeconfig.go:92] found "functional-740000" server: "https://192.168.105.4:8441"
	I0911 03:57:02.676562    1912 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 03:57:02.679520    1912 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0911 03:57:02.679523    1912 kubeadm.go:1128] stopping kube-system containers ...
	I0911 03:57:02.679556    1912 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0911 03:57:02.686534    1912 docker.go:462] Stopping containers: [9fd9bdc0350e 6e2ca94c2389 0667be72cc80 2c9ee88482e3 6d788d6a9687 fa4547b4e52e b10509d704c0 677f73db2075 e08dd8884bdc 6acb173901ae db73d6546d4a a871e5c40f15 a4c7af6f9e07 94e5338bb00d 2c3721e9302f 8eff11f56a8a 28b97ce24746 8feb5e1b0882 c382ed08189d 62f75ef71438 92199ecc7aaf 13f9ff7851a4 a2908050622a a8b0d8a93bf8 c1e0396c5c98 5a1f6773f76b 4785ef1b4034 260f6564628d]
	I0911 03:57:02.686614    1912 ssh_runner.go:195] Run: docker stop 9fd9bdc0350e 6e2ca94c2389 0667be72cc80 2c9ee88482e3 6d788d6a9687 fa4547b4e52e b10509d704c0 677f73db2075 e08dd8884bdc 6acb173901ae db73d6546d4a a871e5c40f15 a4c7af6f9e07 94e5338bb00d 2c3721e9302f 8eff11f56a8a 28b97ce24746 8feb5e1b0882 c382ed08189d 62f75ef71438 92199ecc7aaf 13f9ff7851a4 a2908050622a a8b0d8a93bf8 c1e0396c5c98 5a1f6773f76b 4785ef1b4034 260f6564628d
	I0911 03:57:02.693696    1912 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 03:57:02.793500    1912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 03:57:02.797680    1912 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep 11 10:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Sep 11 10:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep 11 10:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep 11 10:55 /etc/kubernetes/scheduler.conf
	
	I0911 03:57:02.797710    1912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0911 03:57:02.801071    1912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0911 03:57:02.804550    1912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0911 03:57:02.808035    1912 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0911 03:57:02.808059    1912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0911 03:57:02.811478    1912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0911 03:57:02.814490    1912 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0911 03:57:02.814514    1912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0911 03:57:02.817249    1912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 03:57:02.820034    1912 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 03:57:02.820037    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 03:57:02.840324    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 03:57:03.466199    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 03:57:03.556198    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 03:57:03.584350    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
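Rather than a full kubeadm init, the restart path above replays only the phases needed after a reconfigure: certs, kubeconfig, kubelet-start, control-plane, and etcd. The same sequence as plain commands, with the binary and config paths from the log:

    KUBEADM=/var/lib/minikube/binaries/v1.28.1/kubeadm
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo "$KUBEADM" init phase certs all --config "$CFG"
    sudo "$KUBEADM" init phase kubeconfig all --config "$CFG"
    sudo "$KUBEADM" init phase kubelet-start --config "$CFG"
    sudo "$KUBEADM" init phase control-plane all --config "$CFG"
    sudo "$KUBEADM" init phase etcd local --config "$CFG"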
	I0911 03:57:03.611608    1912 api_server.go:52] waiting for apiserver process to appear ...
	I0911 03:57:03.611661    1912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 03:57:03.623430    1912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 03:57:04.129771    1912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 03:57:04.629759    1912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 03:57:04.634108    1912 api_server.go:72] duration metric: took 1.022527s to wait for apiserver process to appear ...
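The wait above reruns sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms until a matching process exists. As a loop:

    # -f matches against the full command line, -x requires the whole
    # line to match the pattern, -n reports only the newest process.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 0.5
    done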
	I0911 03:57:04.634113    1912 api_server.go:88] waiting for apiserver healthz status ...
	I0911 03:57:04.634121    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0911 03:57:06.342578    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 03:57:06.342587    1912 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 03:57:06.342592    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0911 03:57:06.349677    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 03:57:06.349683    1912 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 03:57:06.851716    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0911 03:57:06.855426    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 03:57:06.855432    1912 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 03:57:07.351696    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0911 03:57:07.355967    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 03:57:07.355974    1912 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 03:57:07.850183    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0911 03:57:07.853740    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0911 03:57:07.859415    1912 api_server.go:141] control plane version: v1.28.1
	I0911 03:57:07.859420    1912 api_server.go:131] duration metric: took 3.225386583s to wait for apiserver health ...
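The healthz wait above tolerates the expected early failure modes: first 403, because the probe connects anonymously and /healthz stays forbidden until the bootstrap RBAC roles are in place; then 500 while the [-] post-start hooks are still failing; finally 200 with body "ok". A minimal equivalent poll (-k because no client certificate is presented, -m 2 as a per-request timeout):

    until [ "$(curl -ksm 2 https://192.168.105.4:8441/healthz)" = "ok" ]; do
        sleep 0.5
    done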
	I0911 03:57:07.859424    1912 cni.go:84] Creating CNI manager for ""
	I0911 03:57:07.859429    1912 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:57:07.862630    1912 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 03:57:07.866640    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 03:57:07.869840    1912 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 03:57:07.874642    1912 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 03:57:07.879262    1912 system_pods.go:59] 7 kube-system pods found
	I0911 03:57:07.879270    1912 system_pods.go:61] "coredns-5dd5756b68-cshzx" [fab96eef-4c97-42a0-82f6-3f6404f4b9c8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 03:57:07.879273    1912 system_pods.go:61] "etcd-functional-740000" [11139528-a46e-44fa-b56c-83024d6ed373] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 03:57:07.879277    1912 system_pods.go:61] "kube-apiserver-functional-740000" [c1bdad66-92ee-4902-b51d-244ddadb89a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 03:57:07.879280    1912 system_pods.go:61] "kube-controller-manager-functional-740000" [41b2f85e-a8ca-46a2-abbd-54e8354cc183] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 03:57:07.879283    1912 system_pods.go:61] "kube-proxy-xmhw9" [94142ec5-c850-4cea-8eb1-2f6f78c30c0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 03:57:07.879285    1912 system_pods.go:61] "kube-scheduler-functional-740000" [bcec002f-f589-4db4-be22-fc7de65ebb6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 03:57:07.879287    1912 system_pods.go:61] "storage-provisioner" [bb69cc6c-d468-4340-92f4-8386dbe0fa68] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 03:57:07.879289    1912 system_pods.go:74] duration metric: took 4.644709ms to wait for pod list to return data ...
	I0911 03:57:07.879291    1912 node_conditions.go:102] verifying NodePressure condition ...
	I0911 03:57:07.880818    1912 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0911 03:57:07.880824    1912 node_conditions.go:123] node cpu capacity is 2
	I0911 03:57:07.880829    1912 node_conditions.go:105] duration metric: took 1.535959ms to run NodePressure ...
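The NodePressure verification reads capacity straight from the node status (2 CPUs and 17784760Ki of ephemeral storage above). The same fields can be inspected with kubectl, assuming a kubeconfig pointed at this cluster:

    kubectl get node functional-740000 \
        -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}{"\n"}'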
	I0911 03:57:07.880835    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 03:57:07.970196    1912 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 03:57:07.972627    1912 kubeadm.go:787] kubelet initialised
	I0911 03:57:07.972631    1912 kubeadm.go:788] duration metric: took 2.429208ms waiting for restarted kubelet to initialise ...
	I0911 03:57:07.972635    1912 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 03:57:07.975393    1912 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cshzx" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:09.983905    1912 pod_ready.go:92] pod "coredns-5dd5756b68-cshzx" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:09.983910    1912 pod_ready.go:81] duration metric: took 2.008564125s waiting for pod "coredns-5dd5756b68-cshzx" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:09.983915    1912 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:11.994060    1912 pod_ready.go:102] pod "etcd-functional-740000" in "kube-system" namespace has status "Ready":"False"
	I0911 03:57:14.493054    1912 pod_ready.go:102] pod "etcd-functional-740000" in "kube-system" namespace has status "Ready":"False"
	I0911 03:57:16.493241    1912 pod_ready.go:102] pod "etcd-functional-740000" in "kube-system" namespace has status "Ready":"False"
	I0911 03:57:18.993074    1912 pod_ready.go:102] pod "etcd-functional-740000" in "kube-system" namespace has status "Ready":"False"
	I0911 03:57:19.493416    1912 pod_ready.go:92] pod "etcd-functional-740000" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:19.493423    1912 pod_ready.go:81] duration metric: took 9.509746875s waiting for pod "etcd-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:19.493428    1912 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:19.495811    1912 pod_ready.go:92] pod "kube-apiserver-functional-740000" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:19.495814    1912 pod_ready.go:81] duration metric: took 2.383167ms waiting for pod "kube-apiserver-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:19.495817    1912 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:21.504785    1912 pod_ready.go:102] pod "kube-controller-manager-functional-740000" in "kube-system" namespace has status "Ready":"False"
	I0911 03:57:22.505355    1912 pod_ready.go:92] pod "kube-controller-manager-functional-740000" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:22.505361    1912 pod_ready.go:81] duration metric: took 3.009617416s waiting for pod "kube-controller-manager-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:22.505365    1912 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xmhw9" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:22.507784    1912 pod_ready.go:92] pod "kube-proxy-xmhw9" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:22.507789    1912 pod_ready.go:81] duration metric: took 2.421959ms waiting for pod "kube-proxy-xmhw9" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:22.507792    1912 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:22.510053    1912 pod_ready.go:92] pod "kube-scheduler-functional-740000" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:22.510056    1912 pod_ready.go:81] duration metric: took 2.262125ms waiting for pod "kube-scheduler-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:22.510068    1912 pod_ready.go:38] duration metric: took 14.537790375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
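The pod_ready loop above polls the pod list until every system-critical pod reports Ready. An equivalent wait with kubectl, using two of the labels from the list above (kubectl access assumed):

    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
    kubectl -n kube-system wait pod -l component=kube-apiserver --for=condition=Ready --timeout=4m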
	I0911 03:57:22.510075    1912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 03:57:22.513842    1912 ops.go:34] apiserver oom_adj: -16
	I0911 03:57:22.513852    1912 kubeadm.go:640] restartCluster took 19.84184925s
	I0911 03:57:22.513854    1912 kubeadm.go:406] StartCluster complete in 19.850752333s
	I0911 03:57:22.513861    1912 settings.go:142] acquiring lock: {Name:mk1469232b3abbdcc69ed77e286fb2789adb44fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:57:22.513951    1912 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 03:57:22.514271    1912 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/kubeconfig: {Name:mk8b43c711db1489632c69fe978a061a5dcf6734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:57:22.514508    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 03:57:22.514548    1912 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 03:57:22.514582    1912 addons.go:69] Setting storage-provisioner=true in profile "functional-740000"
	I0911 03:57:22.514585    1912 addons.go:69] Setting default-storageclass=true in profile "functional-740000"
	I0911 03:57:22.514588    1912 addons.go:231] Setting addon storage-provisioner=true in "functional-740000"
	W0911 03:57:22.514591    1912 addons.go:240] addon storage-provisioner should already be in state true
	I0911 03:57:22.514591    1912 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-740000"
	I0911 03:57:22.514604    1912 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:57:22.514616    1912 host.go:66] Checking if "functional-740000" exists ...
	I0911 03:57:22.520608    1912 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 03:57:22.523546    1912 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 03:57:22.523550    1912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 03:57:22.523557    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/id_rsa Username:docker}
	I0911 03:57:22.524001    1912 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-740000" context rescaled to 1 replicas
	I0911 03:57:22.524012    1912 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 03:57:22.527554    1912 out.go:177] * Verifying Kubernetes components...
	I0911 03:57:22.526050    1912 addons.go:231] Setting addon default-storageclass=true in "functional-740000"
	W0911 03:57:22.533493    1912 addons.go:240] addon default-storageclass should already be in state true
	I0911 03:57:22.533507    1912 host.go:66] Checking if "functional-740000" exists ...
	I0911 03:57:22.533534    1912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 03:57:22.534216    1912 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 03:57:22.534219    1912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 03:57:22.534225    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/id_rsa Username:docker}
	I0911 03:57:22.557658    1912 node_ready.go:35] waiting up to 6m0s for node "functional-740000" to be "Ready" ...
	I0911 03:57:22.557673    1912 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0911 03:57:22.559546    1912 node_ready.go:49] node "functional-740000" has status "Ready":"True"
	I0911 03:57:22.559556    1912 node_ready.go:38] duration metric: took 1.88125ms waiting for node "functional-740000" to be "Ready" ...
	I0911 03:57:22.559559    1912 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 03:57:22.564510    1912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 03:57:22.594466    1912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 03:57:22.695217    1912 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cshzx" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:22.902791    1912 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0911 03:57:22.910824    1912 addons.go:502] enable addons completed in 396.312375ms: enabled=[storage-provisioner default-storageclass]
	I0911 03:57:23.093786    1912 pod_ready.go:92] pod "coredns-5dd5756b68-cshzx" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:23.093791    1912 pod_ready.go:81] duration metric: took 398.578958ms waiting for pod "coredns-5dd5756b68-cshzx" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:23.093796    1912 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:23.493558    1912 pod_ready.go:92] pod "etcd-functional-740000" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:23.493563    1912 pod_ready.go:81] duration metric: took 399.774958ms waiting for pod "etcd-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:23.493567    1912 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:23.892679    1912 pod_ready.go:92] pod "kube-apiserver-functional-740000" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:23.892684    1912 pod_ready.go:81] duration metric: took 399.124917ms waiting for pod "kube-apiserver-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:23.892688    1912 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:24.293304    1912 pod_ready.go:92] pod "kube-controller-manager-functional-740000" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:24.293308    1912 pod_ready.go:81] duration metric: took 400.628459ms waiting for pod "kube-controller-manager-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:24.293334    1912 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xmhw9" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:24.693836    1912 pod_ready.go:92] pod "kube-proxy-xmhw9" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:24.693841    1912 pod_ready.go:81] duration metric: took 400.515084ms waiting for pod "kube-proxy-xmhw9" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:24.693846    1912 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:25.093582    1912 pod_ready.go:92] pod "kube-scheduler-functional-740000" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:25.093589    1912 pod_ready.go:81] duration metric: took 399.749666ms waiting for pod "kube-scheduler-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:25.093593    1912 pod_ready.go:38] duration metric: took 2.534094459s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 03:57:25.093604    1912 api_server.go:52] waiting for apiserver process to appear ...
	I0911 03:57:25.093688    1912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 03:57:25.098145    1912 api_server.go:72] duration metric: took 2.574189708s to wait for apiserver process to appear ...
	I0911 03:57:25.098149    1912 api_server.go:88] waiting for apiserver healthz status ...
	I0911 03:57:25.098155    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0911 03:57:25.101291    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0911 03:57:25.101932    1912 api_server.go:141] control plane version: v1.28.1
	I0911 03:57:25.101935    1912 api_server.go:131] duration metric: took 3.784709ms to wait for apiserver health ...
	I0911 03:57:25.101937    1912 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 03:57:25.295410    1912 system_pods.go:59] 7 kube-system pods found
	I0911 03:57:25.295416    1912 system_pods.go:61] "coredns-5dd5756b68-cshzx" [fab96eef-4c97-42a0-82f6-3f6404f4b9c8] Running
	I0911 03:57:25.295418    1912 system_pods.go:61] "etcd-functional-740000" [11139528-a46e-44fa-b56c-83024d6ed373] Running
	I0911 03:57:25.295420    1912 system_pods.go:61] "kube-apiserver-functional-740000" [c1bdad66-92ee-4902-b51d-244ddadb89a4] Running
	I0911 03:57:25.295422    1912 system_pods.go:61] "kube-controller-manager-functional-740000" [41b2f85e-a8ca-46a2-abbd-54e8354cc183] Running
	I0911 03:57:25.295424    1912 system_pods.go:61] "kube-proxy-xmhw9" [94142ec5-c850-4cea-8eb1-2f6f78c30c0e] Running
	I0911 03:57:25.295425    1912 system_pods.go:61] "kube-scheduler-functional-740000" [bcec002f-f589-4db4-be22-fc7de65ebb6f] Running
	I0911 03:57:25.295427    1912 system_pods.go:61] "storage-provisioner" [bb69cc6c-d468-4340-92f4-8386dbe0fa68] Running
	I0911 03:57:25.295429    1912 system_pods.go:74] duration metric: took 193.495042ms to wait for pod list to return data ...
	I0911 03:57:25.295432    1912 default_sa.go:34] waiting for default service account to be created ...
	I0911 03:57:25.493573    1912 default_sa.go:45] found service account: "default"
	I0911 03:57:25.493578    1912 default_sa.go:55] duration metric: took 198.149625ms for default service account to be created ...
	I0911 03:57:25.493581    1912 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 03:57:25.694255    1912 system_pods.go:86] 7 kube-system pods found
	I0911 03:57:25.694261    1912 system_pods.go:89] "coredns-5dd5756b68-cshzx" [fab96eef-4c97-42a0-82f6-3f6404f4b9c8] Running
	I0911 03:57:25.694264    1912 system_pods.go:89] "etcd-functional-740000" [11139528-a46e-44fa-b56c-83024d6ed373] Running
	I0911 03:57:25.694266    1912 system_pods.go:89] "kube-apiserver-functional-740000" [c1bdad66-92ee-4902-b51d-244ddadb89a4] Running
	I0911 03:57:25.694268    1912 system_pods.go:89] "kube-controller-manager-functional-740000" [41b2f85e-a8ca-46a2-abbd-54e8354cc183] Running
	I0911 03:57:25.694270    1912 system_pods.go:89] "kube-proxy-xmhw9" [94142ec5-c850-4cea-8eb1-2f6f78c30c0e] Running
	I0911 03:57:25.694272    1912 system_pods.go:89] "kube-scheduler-functional-740000" [bcec002f-f589-4db4-be22-fc7de65ebb6f] Running
	I0911 03:57:25.694273    1912 system_pods.go:89] "storage-provisioner" [bb69cc6c-d468-4340-92f4-8386dbe0fa68] Running
	I0911 03:57:25.694275    1912 system_pods.go:126] duration metric: took 200.698209ms to wait for k8s-apps to be running ...
	I0911 03:57:25.694277    1912 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 03:57:25.694328    1912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 03:57:25.699330    1912 system_svc.go:56] duration metric: took 5.049792ms WaitForService to wait for kubelet.
	I0911 03:57:25.699334    1912 kubeadm.go:581] duration metric: took 3.175394666s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 03:57:25.699342    1912 node_conditions.go:102] verifying NodePressure condition ...
	I0911 03:57:25.893622    1912 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0911 03:57:25.893629    1912 node_conditions.go:123] node cpu capacity is 2
	I0911 03:57:25.893634    1912 node_conditions.go:105] duration metric: took 194.294958ms to run NodePressure ...
	I0911 03:57:25.893639    1912 start.go:228] waiting for startup goroutines ...
	I0911 03:57:25.893641    1912 start.go:233] waiting for cluster config update ...
	I0911 03:57:25.893645    1912 start.go:242] writing updated cluster config ...
	I0911 03:57:25.893970    1912 ssh_runner.go:195] Run: rm -f paused
	I0911 03:57:25.922985    1912 start.go:600] kubectl: 1.27.2, cluster: 1.28.1 (minor skew: 1)
	I0911 03:57:25.927973    1912 out.go:177] * Done! kubectl is now configured to use "functional-740000" cluster and "default" namespace by default
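
For reference, the api_server.go steps above poll the apiserver's /healthz endpoint until it answers 200 "ok". A minimal Go sketch of that style of probe, assuming the endpoint from the log and a self-signed serving certificate; the retry loop and interval are illustrative choices, not minikube's actual implementation:

    // healthzprobe.go: poll an apiserver /healthz endpoint until it reports ok.
    // A sketch only; minikube's real check lives in its bootstrapper packages.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // The apiserver cert is self-signed, so skip verification for this
        // bootstrap-style liveness check.
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        url := "https://192.168.105.4:8441/healthz" // endpoint from the log above
        deadline := time.Now().Add(1 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
                    return
                }
            }
            time.Sleep(500 * time.Millisecond) // illustrative retry interval
        }
        fmt.Println("apiserver did not become healthy before the deadline")
    }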
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-11 10:55:34 UTC, ends at Mon 2023-09-11 10:58:22 UTC. --
	Sep 11 10:58:06 functional-740000 dockerd[6622]: time="2023-09-11T10:58:06.074940870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:58:06 functional-740000 dockerd[6622]: time="2023-09-11T10:58:06.074952536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 11 10:58:06 functional-740000 dockerd[6622]: time="2023-09-11T10:58:06.074961536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:58:06 functional-740000 cri-dockerd[6879]: time="2023-09-11T10:58:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e8f47473c7592d88b25d76f82254dd58fe1dad38d483bdb9bdaf9de9e0b0ea08/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 11 10:58:07 functional-740000 cri-dockerd[6879]: time="2023-09-11T10:58:07Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Sep 11 10:58:07 functional-740000 dockerd[6622]: time="2023-09-11T10:58:07.358611290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 11 10:58:07 functional-740000 dockerd[6622]: time="2023-09-11T10:58:07.358668205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:58:07 functional-740000 dockerd[6622]: time="2023-09-11T10:58:07.358682205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 11 10:58:07 functional-740000 dockerd[6622]: time="2023-09-11T10:58:07.358692621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:58:07 functional-740000 dockerd[6616]: time="2023-09-11T10:58:07.423296848Z" level=info msg="ignoring event" container=00aedc14b8ef703e05639d6bde4961ce26a1e136f7fa690bf66761440b16e94a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 11 10:58:07 functional-740000 dockerd[6622]: time="2023-09-11T10:58:07.423837040Z" level=info msg="shim disconnected" id=00aedc14b8ef703e05639d6bde4961ce26a1e136f7fa690bf66761440b16e94a namespace=moby
	Sep 11 10:58:07 functional-740000 dockerd[6622]: time="2023-09-11T10:58:07.423865831Z" level=warning msg="cleaning up after shim disconnected" id=00aedc14b8ef703e05639d6bde4961ce26a1e136f7fa690bf66761440b16e94a namespace=moby
	Sep 11 10:58:07 functional-740000 dockerd[6622]: time="2023-09-11T10:58:07.423871289Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 11 10:58:09 functional-740000 dockerd[6622]: time="2023-09-11T10:58:09.241790553Z" level=info msg="shim disconnected" id=e8f47473c7592d88b25d76f82254dd58fe1dad38d483bdb9bdaf9de9e0b0ea08 namespace=moby
	Sep 11 10:58:09 functional-740000 dockerd[6622]: time="2023-09-11T10:58:09.241823177Z" level=warning msg="cleaning up after shim disconnected" id=e8f47473c7592d88b25d76f82254dd58fe1dad38d483bdb9bdaf9de9e0b0ea08 namespace=moby
	Sep 11 10:58:09 functional-740000 dockerd[6622]: time="2023-09-11T10:58:09.241827677Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 11 10:58:09 functional-740000 dockerd[6616]: time="2023-09-11T10:58:09.241881342Z" level=info msg="ignoring event" container=e8f47473c7592d88b25d76f82254dd58fe1dad38d483bdb9bdaf9de9e0b0ea08 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 11 10:58:12 functional-740000 dockerd[6622]: time="2023-09-11T10:58:12.725434432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 11 10:58:12 functional-740000 dockerd[6622]: time="2023-09-11T10:58:12.725627801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:58:12 functional-740000 dockerd[6622]: time="2023-09-11T10:58:12.725635093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 11 10:58:12 functional-740000 dockerd[6622]: time="2023-09-11T10:58:12.725639343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:58:12 functional-740000 dockerd[6616]: time="2023-09-11T10:58:12.779492323Z" level=info msg="ignoring event" container=f08553c36a6be169ea384650d6c523bb72d723dbe464af1285deec19218485b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 11 10:58:12 functional-740000 dockerd[6622]: time="2023-09-11T10:58:12.779561238Z" level=info msg="shim disconnected" id=f08553c36a6be169ea384650d6c523bb72d723dbe464af1285deec19218485b4 namespace=moby
	Sep 11 10:58:12 functional-740000 dockerd[6622]: time="2023-09-11T10:58:12.779586029Z" level=warning msg="cleaning up after shim disconnected" id=f08553c36a6be169ea384650d6c523bb72d723dbe464af1285deec19218485b4 namespace=moby
	Sep 11 10:58:12 functional-740000 dockerd[6622]: time="2023-09-11T10:58:12.779589904Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	f08553c36a6be       72565bf5bbedf                                                                                         10 seconds ago       Exited              echoserver-arm            2                   6e27f48482508
	00aedc14b8ef7       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   15 seconds ago       Exited              mount-munger              0                   e8f47473c7592
	5f7f8bbb11afc       72565bf5bbedf                                                                                         23 seconds ago       Exited              echoserver-arm            2                   7df91eddb9810
	9be6b26127a57       nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153                         31 seconds ago       Running             myfrontend                0                   5e00cd4d8606f
	ce9e4ab852b55       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                         48 seconds ago       Running             nginx                     0                   ac7c94646fa11
	6ced62e735a2c       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       3                   852699c4a2880
	30d6269da315e       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   5c8e576d6bf3e
	de4a10dbf990e       812f5241df7fd                                                                                         About a minute ago   Running             kube-proxy                2                   c509fa8239989
	e347e144afa51       b4a5a57e99492                                                                                         About a minute ago   Running             kube-scheduler            2                   db9cb7dc6189f
	3713dda03afbe       9cdd6470f48c8                                                                                         About a minute ago   Running             etcd                      2                   5ffcbb72f80b3
	023d1ba072cbe       8b6e1980b7584                                                                                         About a minute ago   Running             kube-controller-manager   2                   09e281baef59c
	d5ce1ab54e283       b29fb62480892                                                                                         About a minute ago   Running             kube-apiserver            0                   513830cbf8e78
	9fd9bdc0350e1       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       2                   94e5338bb00d0
	0667be72cc803       97e04611ad434                                                                                         About a minute ago   Exited              coredns                   1                   e08dd8884bdcf
	2c9ee88482e3f       9cdd6470f48c8                                                                                         About a minute ago   Exited              etcd                      1                   a871e5c40f159
	fa4547b4e52ec       b4a5a57e99492                                                                                         About a minute ago   Exited              kube-scheduler            1                   6acb173901ae8
	b10509d704c0e       8b6e1980b7584                                                                                         About a minute ago   Exited              kube-controller-manager   1                   db73d6546d4a5
	677f73db20759       812f5241df7fd                                                                                         About a minute ago   Exited              kube-proxy                1                   a4c7af6f9e070
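
The two Exited echoserver-arm rows (ATTEMPT 2) are the restart/backoff cycle that surfaces later in the kubelet log as CrashLoopBackOff. One way to see that state from the API rather than the container runtime is to read container statuses with client-go; a sketch, assuming a standard kubeconfig path (none of this is part of the test harness):

    // restartreport.go: print restart counts and last termination state for
    // pods in the default namespace. A client-go sketch, not minikube code.
    package main

    import (
        "context"
        "fmt"
        "path/filepath"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config") // assumed path
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            for _, s := range p.Status.ContainerStatuses {
                // A terminated last state plus a rising RestartCount is the
                // crash-loop signature behind the Exited rows above.
                if t := s.LastTerminationState.Terminated; t != nil {
                    fmt.Printf("%s/%s restarts=%d lastExit=%d (%s)\n",
                        p.Name, s.Name, s.RestartCount, t.ExitCode, t.Reason)
                }
            }
        }
    }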
	
	* 
	* ==> coredns [0667be72cc80] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34865 - 4202 "HINFO IN 5628587765548081682.7266691883950973552. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004921693s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [30d6269da315] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50356 - 33523 "HINFO IN 8020413452089812584.2274111581267159359. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004478005s
	[INFO] 10.244.0.1:45366 - 49062 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000131498s
	[INFO] 10.244.0.1:63507 - 37227 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.00008879s
	[INFO] 10.244.0.1:6392 - 23891 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000029s
	[INFO] 10.244.0.1:14894 - 59511 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.000927779s
	[INFO] 10.244.0.1:60849 - 27832 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000142081s
	[INFO] 10.244.0.1:37535 - 47561 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000039s
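
This CoreDNS instance is answering A and AAAA queries for nginx-svc.default.svc.cluster.local. From a pod inside the cluster, where /etc/resolv.conf points at CoreDNS, the same lookup can be reproduced with Go's resolver (the service name is taken from the query log; everything else is illustrative):

    // dnscheck.go: resolve an in-cluster service name, as the CoreDNS log records.
    // Only meaningful when run inside the cluster.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        name := "nginx-svc.default.svc.cluster.local" // name from the query log above
        addrs, err := net.LookupHost(name)
        if err != nil {
            fmt.Println("lookup failed:", err)
            return
        }
        for _, a := range addrs {
            fmt.Println(name, "->", a)
        }
    }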
	
	* 
	* ==> describe nodes <==
	* Name:               functional-740000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-740000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=functional-740000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T03_55_51_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 10:55:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-740000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 10:58:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 10:58:07 +0000   Mon, 11 Sep 2023 10:55:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 10:58:07 +0000   Mon, 11 Sep 2023 10:55:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 10:58:07 +0000   Mon, 11 Sep 2023 10:55:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 10:58:07 +0000   Mon, 11 Sep 2023 10:55:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-740000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc4e1b5ddc2b43169e12cb4be28b15ea
	  System UUID:                fc4e1b5ddc2b43169e12cb4be28b15ea
	  Boot ID:                    16b08f7b-fa6a-4d0e-b063-3a9bda515c0e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.5
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-r2wpj                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     hello-node-connect-7799dfb7c6-cfnsh          0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 coredns-5dd5756b68-cshzx                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m18s
	  kube-system                 etcd-functional-740000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m31s
	  kube-system                 kube-apiserver-functional-740000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-functional-740000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-xmhw9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-functional-740000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m17s              kube-proxy       
	  Normal  Starting                 74s                kube-proxy       
	  Normal  Starting                 116s               kube-proxy       
	  Normal  Starting                 2m31s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m31s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m31s              kubelet          Node functional-740000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m31s              kubelet          Node functional-740000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m31s              kubelet          Node functional-740000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m28s              kubelet          Node functional-740000 status is now: NodeReady
	  Normal  RegisteredNode           2m19s              node-controller  Node functional-740000 event: Registered Node functional-740000 in Controller
	  Normal  RegisteredNode           104s               node-controller  Node functional-740000 event: Registered Node functional-740000 in Controller
	  Normal  Starting                 79s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  79s (x8 over 79s)  kubelet          Node functional-740000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    79s (x8 over 79s)  kubelet          Node functional-740000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     79s (x7 over 79s)  kubelet          Node functional-740000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           64s                node-controller  Node functional-740000 event: Registered Node functional-740000 in Controller
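
The NodePressure verification earlier in the start log (node_conditions.go) reads the same fields shown here: the node's conditions plus its cpu and ephemeral-storage capacity. A client-go sketch of an equivalent check, assuming a default kubeconfig and using the node name from this report:

    // nodepressure.go: report node conditions and capacity, mirroring the
    // describe output above. A client-go sketch, not minikube's implementation.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-740000", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            // MemoryPressure/DiskPressure/PIDPressure should be False; Ready should be True.
            fmt.Printf("%-16s %s\n", c.Type, c.Status)
        }
        fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
        fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
    }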
	
	* 
	* ==> dmesg <==
	* [  +0.087733] systemd-fstab-generator[3705]: Ignoring "noauto" for root device
	[  +0.092090] systemd-fstab-generator[3718]: Ignoring "noauto" for root device
	[  +5.157788] kauditd_printk_skb: 25 callbacks suppressed
	[  +6.252877] systemd-fstab-generator[4288]: Ignoring "noauto" for root device
	[  +0.064699] systemd-fstab-generator[4299]: Ignoring "noauto" for root device
	[  +0.064529] systemd-fstab-generator[4310]: Ignoring "noauto" for root device
	[  +0.062971] systemd-fstab-generator[4321]: Ignoring "noauto" for root device
	[  +0.099579] systemd-fstab-generator[4394]: Ignoring "noauto" for root device
	[  +5.100979] kauditd_printk_skb: 34 callbacks suppressed
	[ +24.203098] systemd-fstab-generator[6153]: Ignoring "noauto" for root device
	[  +0.137048] systemd-fstab-generator[6186]: Ignoring "noauto" for root device
	[  +0.078455] systemd-fstab-generator[6197]: Ignoring "noauto" for root device
	[  +0.097942] systemd-fstab-generator[6210]: Ignoring "noauto" for root device
	[Sep11 10:57] systemd-fstab-generator[6767]: Ignoring "noauto" for root device
	[  +0.080457] systemd-fstab-generator[6778]: Ignoring "noauto" for root device
	[  +0.063525] systemd-fstab-generator[6789]: Ignoring "noauto" for root device
	[  +0.063583] systemd-fstab-generator[6800]: Ignoring "noauto" for root device
	[  +0.072071] systemd-fstab-generator[6863]: Ignoring "noauto" for root device
	[  +1.139670] systemd-fstab-generator[7117]: Ignoring "noauto" for root device
	[  +4.651979] kauditd_printk_skb: 29 callbacks suppressed
	[ +24.368928] kauditd_printk_skb: 16 callbacks suppressed
	[ +10.560897] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +3.866293] kauditd_printk_skb: 4 callbacks suppressed
	[ +10.265446] kauditd_printk_skb: 1 callbacks suppressed
	[Sep11 10:58] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [2c9ee88482e3] <==
	* {"level":"info","ts":"2023-09-11T10:56:23.867217Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T10:56:25.421844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-11T10:56:25.421979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-11T10:56:25.422053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-09-11T10:56:25.422124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-09-11T10:56:25.422164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-11T10:56:25.422236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-09-11T10:56:25.42229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-11T10:56:25.425186Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-740000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T10:56:25.425204Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T10:56:25.425258Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T10:56:25.428733Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-11T10:56:25.43012Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T10:56:25.430184Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T10:56:25.431107Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-11T10:56:50.787114Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-11T10:56:50.787141Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-740000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2023-09-11T10:56:50.787192Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-11T10:56:50.787233Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-11T10:56:50.796118Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-11T10:56:50.796136Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-11T10:56:50.796165Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-09-11T10:56:50.797735Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-11T10:56:50.797764Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-11T10:56:50.797768Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-740000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> etcd [3713dda03afb] <==
	* {"level":"info","ts":"2023-09-11T10:57:04.636305Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T10:57:04.636326Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T10:57:04.636444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-09-11T10:57:04.636538Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-09-11T10:57:04.636585Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T10:57:04.636696Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T10:57:04.638193Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-11T10:57:04.639926Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-11T10:57:04.64Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-11T10:57:04.640241Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-11T10:57:04.640266Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-11T10:57:05.800654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-11T10:57:05.800765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-11T10:57:05.800796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-11T10:57:05.800821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-09-11T10:57:05.800837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-11T10:57:05.800855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-09-11T10:57:05.800871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-11T10:57:05.803984Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-740000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T10:57:05.80405Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T10:57:05.805733Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-11T10:57:05.806069Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T10:57:05.807517Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-11T10:57:05.817649Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T10:57:05.817675Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  10:58:22 up 2 min,  0 users,  load average: 0.84, 0.35, 0.13
	Linux functional-740000 5.10.57 #1 SMP PREEMPT Thu Aug 24 12:01:08 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [d5ce1ab54e28] <==
	* I0911 10:57:06.473819       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0911 10:57:06.473826       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0911 10:57:06.474075       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0911 10:57:06.475606       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0911 10:57:06.475630       1 aggregator.go:166] initial CRD sync complete...
	I0911 10:57:06.475638       1 autoregister_controller.go:141] Starting autoregister controller
	I0911 10:57:06.475645       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0911 10:57:06.475656       1 cache.go:39] Caches are synced for autoregister controller
	E0911 10:57:06.475689       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0911 10:57:06.476001       1 shared_informer.go:318] Caches are synced for configmaps
	I0911 10:57:06.476031       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0911 10:57:06.476336       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0911 10:57:07.375297       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0911 10:57:07.996752       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0911 10:57:07.999989       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0911 10:57:08.013483       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0911 10:57:08.022186       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0911 10:57:08.024600       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0911 10:57:18.726792       1 controller.go:624] quota admission added evaluator for: endpoints
	I0911 10:57:18.927185       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0911 10:57:27.387699       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.185.17"}
	I0911 10:57:31.581679       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.96.178"}
	I0911 10:57:40.977302       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0911 10:57:41.020411       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.83.146"}
	I0911 10:57:57.279580       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.109.60.119"}
	
	* 
	* ==> kube-controller-manager [023d1ba072cb] <==
	* I0911 10:57:19.263583       1 shared_informer.go:318] Caches are synced for garbage collector
	I0911 10:57:19.263596       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0911 10:57:37.387539       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0911 10:57:40.980561       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-7799dfb7c6 to 1"
	I0911 10:57:40.988372       1 event.go:307] "Event occurred" object="default/hello-node-connect-7799dfb7c6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-7799dfb7c6-cfnsh"
	I0911 10:57:40.993503       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="12.838649ms"
	I0911 10:57:41.003640       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="10.113689ms"
	I0911 10:57:41.003664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="10.667µs"
	I0911 10:57:41.003694       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="12.417µs"
	I0911 10:57:41.006021       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="74.874µs"
	I0911 10:57:47.010963       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="16.166µs"
	I0911 10:57:48.023968       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="49.456µs"
	I0911 10:57:49.025964       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="40.04µs"
	I0911 10:57:57.238783       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-759d89bdcc to 1"
	I0911 10:57:57.241878       1 event.go:307] "Event occurred" object="default/hello-node-759d89bdcc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-759d89bdcc-r2wpj"
	I0911 10:57:57.245859       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="6.944261ms"
	I0911 10:57:57.250065       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="4.002488ms"
	I0911 10:57:57.250203       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="30.666µs"
	I0911 10:57:57.254351       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="29.708µs"
	I0911 10:57:58.079239       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="18.416µs"
	I0911 10:57:59.085736       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="23.416µs"
	I0911 10:58:00.093615       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="25.416µs"
	I0911 10:58:12.705536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-connect-7799dfb7c6" duration="28.333µs"
	I0911 10:58:12.715221       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="25.083µs"
	I0911 10:58:13.233408       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-node-759d89bdcc" duration="27µs"
	
	* 
	* ==> kube-controller-manager [b10509d704c0] <==
	* I0911 10:56:38.400850       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0911 10:56:38.412362       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0911 10:56:38.418551       1 shared_informer.go:318] Caches are synced for node
	I0911 10:56:38.418587       1 range_allocator.go:174] "Sending events to api server"
	I0911 10:56:38.418604       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0911 10:56:38.418606       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0911 10:56:38.418609       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0911 10:56:38.419623       1 shared_informer.go:318] Caches are synced for persistent volume
	I0911 10:56:38.420738       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0911 10:56:38.421811       1 shared_informer.go:318] Caches are synced for GC
	I0911 10:56:38.429801       1 shared_informer.go:318] Caches are synced for PVC protection
	I0911 10:56:38.430877       1 shared_informer.go:318] Caches are synced for PV protection
	I0911 10:56:38.432062       1 shared_informer.go:318] Caches are synced for expand
	I0911 10:56:38.433135       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0911 10:56:38.487676       1 shared_informer.go:318] Caches are synced for deployment
	I0911 10:56:38.490881       1 shared_informer.go:318] Caches are synced for HPA
	I0911 10:56:38.521273       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0911 10:56:38.521339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.414µs"
	I0911 10:56:38.580583       1 shared_informer.go:318] Caches are synced for disruption
	I0911 10:56:38.628675       1 shared_informer.go:318] Caches are synced for resource quota
	I0911 10:56:38.631863       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0911 10:56:38.635135       1 shared_informer.go:318] Caches are synced for resource quota
	I0911 10:56:38.958857       1 shared_informer.go:318] Caches are synced for garbage collector
	I0911 10:56:38.963845       1 shared_informer.go:318] Caches are synced for garbage collector
	I0911 10:56:38.963861       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [677f73db2075] <==
	* I0911 10:56:23.416459       1 server_others.go:69] "Using iptables proxy"
	E0911 10:56:23.417312       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-740000": dial tcp 192.168.105.4:8441: connect: connection refused
	I0911 10:56:26.075901       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0911 10:56:26.098024       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0911 10:56:26.098041       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0911 10:56:26.098806       1 server_others.go:152] "Using iptables Proxier"
	I0911 10:56:26.098828       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 10:56:26.098901       1 server.go:846] "Version info" version="v1.28.1"
	I0911 10:56:26.098911       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 10:56:26.099394       1 config.go:188] "Starting service config controller"
	I0911 10:56:26.099407       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 10:56:26.099414       1 config.go:97] "Starting endpoint slice config controller"
	I0911 10:56:26.099416       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 10:56:26.099530       1 config.go:315] "Starting node config controller"
	I0911 10:56:26.099536       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 10:56:26.199947       1 shared_informer.go:318] Caches are synced for node config
	I0911 10:56:26.199952       1 shared_informer.go:318] Caches are synced for service config
	I0911 10:56:26.199958       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [de4a10dbf990] <==
	* I0911 10:57:08.251984       1 server_others.go:69] "Using iptables proxy"
	I0911 10:57:08.257734       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0911 10:57:08.271675       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0911 10:57:08.271722       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0911 10:57:08.272355       1 server_others.go:152] "Using iptables Proxier"
	I0911 10:57:08.272374       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 10:57:08.272439       1 server.go:846] "Version info" version="v1.28.1"
	I0911 10:57:08.272442       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 10:57:08.273179       1 config.go:188] "Starting service config controller"
	I0911 10:57:08.273187       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 10:57:08.273201       1 config.go:97] "Starting endpoint slice config controller"
	I0911 10:57:08.273203       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 10:57:08.275923       1 config.go:315] "Starting node config controller"
	I0911 10:57:08.275930       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 10:57:08.373304       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0911 10:57:08.373303       1 shared_informer.go:318] Caches are synced for service config
	I0911 10:57:08.376097       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e347e144afa5] <==
	* I0911 10:57:04.855434       1 serving.go:348] Generated self-signed cert in-memory
	W0911 10:57:06.403277       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0911 10:57:06.403393       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 10:57:06.403418       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0911 10:57:06.403435       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0911 10:57:06.439325       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0911 10:57:06.439375       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 10:57:06.440509       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0911 10:57:06.440812       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0911 10:57:06.440840       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 10:57:06.440857       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0911 10:57:06.541111       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [fa4547b4e52e] <==
	* I0911 10:56:24.223945       1 serving.go:348] Generated self-signed cert in-memory
	W0911 10:56:26.054661       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0911 10:56:26.054735       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 10:56:26.054770       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0911 10:56:26.054787       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0911 10:56:26.074658       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0911 10:56:26.074805       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 10:56:26.076286       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0911 10:56:26.076381       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0911 10:56:26.076408       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 10:56:26.076430       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0911 10:56:26.176754       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 10:56:50.814508       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0911 10:56:50.814536       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0911 10:56:50.814581       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0911 10:56:50.814681       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 10:55:34 UTC, ends at Mon 2023-09-11 10:58:22 UTC. --
	Sep 11 10:58:00 functional-740000 kubelet[7123]: I0911 10:58:00.089224    7123 scope.go:117] "RemoveContainer" containerID="4a8107ff0dba509ee809f06961f876a526fad8e1215c88fae8524a6c6af38484"
	Sep 11 10:58:00 functional-740000 kubelet[7123]: I0911 10:58:00.089415    7123 scope.go:117] "RemoveContainer" containerID="5f7f8bbb11afcb944e92ce624c55cd448ede8b61eb8763fb0ef446b5df013834"
	Sep 11 10:58:00 functional-740000 kubelet[7123]: E0911 10:58:00.089498    7123 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-cfnsh_default(3917ab83-9989-4db0-8df1-13ea64cad278)\"" pod="default/hello-node-connect-7799dfb7c6-cfnsh" podUID="3917ab83-9989-4db0-8df1-13ea64cad278"
	Sep 11 10:58:03 functional-740000 kubelet[7123]: E0911 10:58:03.707544    7123 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 10:58:03 functional-740000 kubelet[7123]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 10:58:03 functional-740000 kubelet[7123]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 10:58:03 functional-740000 kubelet[7123]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 10:58:03 functional-740000 kubelet[7123]: I0911 10:58:03.759899    7123 scope.go:117] "RemoveContainer" containerID="6d788d6a9687a8fd4936d3a7dc7bd717d9377c62d13f09449605915b1d2dbe51"
	Sep 11 10:58:05 functional-740000 kubelet[7123]: I0911 10:58:05.716783    7123 topology_manager.go:215] "Topology Admit Handler" podUID="bf77116c-80c0-46a1-a227-0343477e1125" podNamespace="default" podName="busybox-mount"
	Sep 11 10:58:05 functional-740000 kubelet[7123]: I0911 10:58:05.851544    7123 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmkgm\" (UniqueName: \"kubernetes.io/projected/bf77116c-80c0-46a1-a227-0343477e1125-kube-api-access-xmkgm\") pod \"busybox-mount\" (UID: \"bf77116c-80c0-46a1-a227-0343477e1125\") " pod="default/busybox-mount"
	Sep 11 10:58:05 functional-740000 kubelet[7123]: I0911 10:58:05.851573    7123 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/bf77116c-80c0-46a1-a227-0343477e1125-test-volume\") pod \"busybox-mount\" (UID: \"bf77116c-80c0-46a1-a227-0343477e1125\") " pod="default/busybox-mount"
	Sep 11 10:58:06 functional-740000 kubelet[7123]: I0911 10:58:06.182371    7123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8f47473c7592d88b25d76f82254dd58fe1dad38d483bdb9bdaf9de9e0b0ea08"
	Sep 11 10:58:09 functional-740000 kubelet[7123]: I0911 10:58:09.367858    7123 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/bf77116c-80c0-46a1-a227-0343477e1125-test-volume\") pod \"bf77116c-80c0-46a1-a227-0343477e1125\" (UID: \"bf77116c-80c0-46a1-a227-0343477e1125\") "
	Sep 11 10:58:09 functional-740000 kubelet[7123]: I0911 10:58:09.367899    7123 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmkgm\" (UniqueName: \"kubernetes.io/projected/bf77116c-80c0-46a1-a227-0343477e1125-kube-api-access-xmkgm\") pod \"bf77116c-80c0-46a1-a227-0343477e1125\" (UID: \"bf77116c-80c0-46a1-a227-0343477e1125\") "
	Sep 11 10:58:09 functional-740000 kubelet[7123]: I0911 10:58:09.368052    7123 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf77116c-80c0-46a1-a227-0343477e1125-test-volume" (OuterVolumeSpecName: "test-volume") pod "bf77116c-80c0-46a1-a227-0343477e1125" (UID: "bf77116c-80c0-46a1-a227-0343477e1125"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 11 10:58:09 functional-740000 kubelet[7123]: I0911 10:58:09.370428    7123 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf77116c-80c0-46a1-a227-0343477e1125-kube-api-access-xmkgm" (OuterVolumeSpecName: "kube-api-access-xmkgm") pod "bf77116c-80c0-46a1-a227-0343477e1125" (UID: "bf77116c-80c0-46a1-a227-0343477e1125"). InnerVolumeSpecName "kube-api-access-xmkgm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 11 10:58:09 functional-740000 kubelet[7123]: I0911 10:58:09.468602    7123 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/bf77116c-80c0-46a1-a227-0343477e1125-test-volume\") on node \"functional-740000\" DevicePath \"\""
	Sep 11 10:58:09 functional-740000 kubelet[7123]: I0911 10:58:09.468615    7123 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xmkgm\" (UniqueName: \"kubernetes.io/projected/bf77116c-80c0-46a1-a227-0343477e1125-kube-api-access-xmkgm\") on node \"functional-740000\" DevicePath \"\""
	Sep 11 10:58:10 functional-740000 kubelet[7123]: I0911 10:58:10.207571    7123 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8f47473c7592d88b25d76f82254dd58fe1dad38d483bdb9bdaf9de9e0b0ea08"
	Sep 11 10:58:12 functional-740000 kubelet[7123]: I0911 10:58:12.698730    7123 scope.go:117] "RemoveContainer" containerID="9cd8aa488710002e341d64e035cf398cbfb32e717295b6ab2d14643740c6d996"
	Sep 11 10:58:12 functional-740000 kubelet[7123]: I0911 10:58:12.698919    7123 scope.go:117] "RemoveContainer" containerID="5f7f8bbb11afcb944e92ce624c55cd448ede8b61eb8763fb0ef446b5df013834"
	Sep 11 10:58:12 functional-740000 kubelet[7123]: E0911 10:58:12.698987    7123 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-cfnsh_default(3917ab83-9989-4db0-8df1-13ea64cad278)\"" pod="default/hello-node-connect-7799dfb7c6-cfnsh" podUID="3917ab83-9989-4db0-8df1-13ea64cad278"
	Sep 11 10:58:13 functional-740000 kubelet[7123]: I0911 10:58:13.226943    7123 scope.go:117] "RemoveContainer" containerID="9cd8aa488710002e341d64e035cf398cbfb32e717295b6ab2d14643740c6d996"
	Sep 11 10:58:13 functional-740000 kubelet[7123]: I0911 10:58:13.227088    7123 scope.go:117] "RemoveContainer" containerID="f08553c36a6be169ea384650d6c523bb72d723dbe464af1285deec19218485b4"
	Sep 11 10:58:13 functional-740000 kubelet[7123]: E0911 10:58:13.227184    7123 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-r2wpj_default(d75329d7-a3e4-4016-b10a-f1b4fb538f6a)\"" pod="default/hello-node-759d89bdcc-r2wpj" podUID="d75329d7-a3e4-4016-b10a-f1b4fb538f6a"
	
	* 
	* ==> storage-provisioner [6ced62e735a2] <==
	* I0911 10:57:08.284461       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 10:57:08.290344       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 10:57:08.290874       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 10:57:25.682518       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 10:57:25.682577       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-740000_8adb2fb2-fdb3-4d85-8621-b557ecb640c1!
	I0911 10:57:25.682906       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4cedab41-1b9f-428c-9666-e3b5ac5e696e", APIVersion:"v1", ResourceVersion:"569", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-740000_8adb2fb2-fdb3-4d85-8621-b557ecb640c1 became leader
	I0911 10:57:25.783642       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-740000_8adb2fb2-fdb3-4d85-8621-b557ecb640c1!
	I0911 10:57:37.387680       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0911 10:57:37.387731       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    0884111b-d0f3-4c78-a050-3c5e86bca768 348 0 2023-09-11 10:56:05 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-09-11 10:56:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-f8a2ae22-0a2f-46e5-8dc8-f8e21032a461 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  f8a2ae22-0a2f-46e5-8dc8-f8e21032a461 629 0 2023-09-11 10:57:37 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-09-11 10:57:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-09-11 10:57:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0911 10:57:37.388134       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-f8a2ae22-0a2f-46e5-8dc8-f8e21032a461" provisioned
	I0911 10:57:37.388140       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0911 10:57:37.388143       1 volume_store.go:212] Trying to save persistentvolume "pvc-f8a2ae22-0a2f-46e5-8dc8-f8e21032a461"
	I0911 10:57:37.388895       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f8a2ae22-0a2f-46e5-8dc8-f8e21032a461", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0911 10:57:37.394062       1 volume_store.go:219] persistentvolume "pvc-f8a2ae22-0a2f-46e5-8dc8-f8e21032a461" saved
	I0911 10:57:37.394766       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"f8a2ae22-0a2f-46e5-8dc8-f8e21032a461", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-f8a2ae22-0a2f-46e5-8dc8-f8e21032a461
	
	* 
	* ==> storage-provisioner [9fd9bdc0350e] <==
	* I0911 10:56:40.222078       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 10:56:40.227055       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 10:56:40.227075       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
-- /stdout --
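
Note on the kube-scheduler logs above: the "Unable to get configmap/extension-apiserver-authentication" warnings are typically transient while RBAC objects are still being reconciled at startup, and the scheduler continues without authentication configuration exactly as logged. If the warning persisted, the fix the log itself suggests, filled in for the scheduler identity, would look roughly like the command below (the binding name is illustrative, and --user replaces --serviceaccount because the scheduler authenticates as the user system:kube-scheduler rather than as a service account):

	kubectl --context functional-740000 create rolebinding scheduler-auth-reader \
	  --namespace=kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler
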
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-740000 -n functional-740000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-740000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-740000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-740000 describe pod busybox-mount:
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-740000/192.168.105.4
	Start Time:       Mon, 11 Sep 2023 03:58:05 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://00aedc14b8ef703e05639d6bde4961ce26a1e136f7fa690bf66761440b16e94a
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 11 Sep 2023 03:58:07 -0700
	      Finished:     Mon, 11 Sep 2023 03:58:07 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xmkgm (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-xmkgm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  17s   default-scheduler  Successfully assigned default/busybox-mount to functional-740000
	  Normal  Pulling    16s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     15s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.123s (1.123s including waiting)
	  Normal  Created    15s   kubelet            Created container mount-munger
	  Normal  Started    15s   kubelet            Started container mount-munger
-- /stdout --
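
Note: busybox-mount is listed as a "non-running" pod above only because the post-mortem query filters on status.phase!=Running, which also matches pods in phase Succeeded; the describe output confirms the container completed normally (Reason: Completed, Exit Code: 0). The same filter can be reproduced by hand:

	kubectl --context functional-740000 get pods -A --field-selector=status.phase!=Running
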
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (41.98s)
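
The failure itself traces back to the echoserver-arm container crash-looping (see the kubelet entries above: "back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-cfnsh"), so the hello-node-connect service never had a ready endpoint for the connect check. One way to pull the crashed container's last log for triage, using the pod name from those entries, would be:

	kubectl --context functional-740000 logs hello-node-connect-7799dfb7c6-cfnsh --previous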

TestFunctional/parallel/SSHCmd (1.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "echo hello"
functional_test.go:1724: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "echo hello": exit status 80 (47.940917ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/monitor: connect: connection refused
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_ssh_d94a149758de690cb366888a5d8e6efc18cafe43_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1729: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-740000 ssh \"echo hello\"" : exit status 80
functional_test.go:1733: expected minikube ssh command output to be -"hello"- but got *"\n\n"*. args "out/minikube-darwin-arm64 -p functional-740000 ssh \"echo hello\""
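
The stderr above points at the root cause: exit status 80 appears to be minikube's guest-error class (the GUEST_STATUS reason shown), raised here because the qemu2 driver's "monitor" control socket for the VM refused connections, so the machine state could not be read before SSH was attempted. A manual check along these lines, using the socket path from the stderr, would confirm whether the socket still exists and the VM is reachable:

	out/minikube-darwin-arm64 status -p functional-740000
	ls -l /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/monitor
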
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "cat /etc/hostname"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-740000 -n functional-740000
helpers_test.go:244: <<< TestFunctional/parallel/SSHCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/SSHCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/SSHCmd logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                 Args                                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| cache   | list                                                                                                  | minikube          | jenkins | v1.31.2 | 11 Sep 23 03:56 PDT | 11 Sep 23 03:56 PDT |
	| ssh     | functional-740000 ssh sudo                                                                            | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:56 PDT | 11 Sep 23 03:56 PDT |
	|         | crictl images                                                                                         |                   |         |         |                     |                     |
	| ssh     | functional-740000                                                                                     | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:56 PDT | 11 Sep 23 03:56 PDT |
	|         | ssh sudo docker rmi                                                                                   |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh                                                                                 | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:56 PDT |                     |
	|         | sudo crictl inspecti                                                                                  |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                          |                   |         |         |                     |                     |
	| cache   | functional-740000 cache reload                                                                        | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:56 PDT | 11 Sep 23 03:56 PDT |
	| ssh     | functional-740000 ssh                                                                                 | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:56 PDT | 11 Sep 23 03:56 PDT |
	|         | sudo crictl inspecti                                                                                  |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                          |                   |         |         |                     |                     |
	| cache   | delete                                                                                                | minikube          | jenkins | v1.31.2 | 11 Sep 23 03:56 PDT | 11 Sep 23 03:56 PDT |
	|         | registry.k8s.io/pause:3.1                                                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                                                | minikube          | jenkins | v1.31.2 | 11 Sep 23 03:56 PDT | 11 Sep 23 03:56 PDT |
	|         | registry.k8s.io/pause:latest                                                                          |                   |         |         |                     |                     |
	| kubectl | functional-740000 kubectl --                                                                          | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:56 PDT | 11 Sep 23 03:56 PDT |
	|         | --context functional-740000                                                                           |                   |         |         |                     |                     |
	|         | get pods                                                                                              |                   |         |         |                     |                     |
	| start   | -p functional-740000                                                                                  | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:56 PDT | 11 Sep 23 03:57 PDT |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                              |                   |         |         |                     |                     |
	|         | --wait=all                                                                                            |                   |         |         |                     |                     |
	| service | invalid-svc -p                                                                                        | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT |                     |
	|         | functional-740000                                                                                     |                   |         |         |                     |                     |
	| cp      | functional-740000 cp                                                                                  | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT | 11 Sep 23 03:57 PDT |
	|         | testdata/cp-test.txt                                                                                  |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                              |                   |         |         |                     |                     |
	| config  | functional-740000 config unset                                                                        | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT | 11 Sep 23 03:57 PDT |
	|         | cpus                                                                                                  |                   |         |         |                     |                     |
	| config  | functional-740000 config get                                                                          | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT |                     |
	|         | cpus                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh -n                                                                              | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT | 11 Sep 23 03:57 PDT |
	|         | functional-740000 sudo cat                                                                            |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                              |                   |         |         |                     |                     |
	| config  | functional-740000 config set                                                                          | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT | 11 Sep 23 03:57 PDT |
	|         | cpus 2                                                                                                |                   |         |         |                     |                     |
	| config  | functional-740000 config get                                                                          | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT | 11 Sep 23 03:57 PDT |
	|         | cpus                                                                                                  |                   |         |         |                     |                     |
	| cp      | functional-740000 cp functional-740000:/home/docker/cp-test.txt                                       | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT | 11 Sep 23 03:57 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd433482928/001/cp-test.txt |                   |         |         |                     |                     |
	| config  | functional-740000 config unset                                                                        | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT | 11 Sep 23 03:57 PDT |
	|         | cpus                                                                                                  |                   |         |         |                     |                     |
	| config  | functional-740000 config get                                                                          | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT |                     |
	|         | cpus                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh -n                                                                              | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT | 11 Sep 23 03:57 PDT |
	|         | functional-740000 sudo cat                                                                            |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                              |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh echo                                                                            | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT |                     |
	|         | hello                                                                                                 |                   |         |         |                     |                     |
	| ssh     | functional-740000 ssh cat                                                                             | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT | 11 Sep 23 03:57 PDT |
	|         | /etc/hostname                                                                                         |                   |         |         |                     |                     |
	| tunnel  | functional-740000 tunnel                                                                              | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT |                     |
	|         | --alsologtostderr                                                                                     |                   |         |         |                     |                     |
	| tunnel  | functional-740000 tunnel                                                                              | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:57 PDT |                     |
	|         | --alsologtostderr                                                                                     |                   |         |         |                     |                     |
	|---------|-------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 03:56:49
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 03:56:49.517571    1912 out.go:296] Setting OutFile to fd 1 ...
	I0911 03:56:49.517690    1912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:56:49.517692    1912 out.go:309] Setting ErrFile to fd 2...
	I0911 03:56:49.517694    1912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:56:49.517807    1912 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 03:56:49.518952    1912 out.go:303] Setting JSON to false
	I0911 03:56:49.535000    1912 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1583,"bootTime":1694428226,"procs":410,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 03:56:49.535057    1912 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 03:56:49.539936    1912 out.go:177] * [functional-740000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 03:56:49.545960    1912 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 03:56:49.549899    1912 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 03:56:49.545991    1912 notify.go:220] Checking for updates...
	I0911 03:56:49.556856    1912 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 03:56:49.559909    1912 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 03:56:49.562915    1912 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 03:56:49.564216    1912 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 03:56:49.567119    1912 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:56:49.567171    1912 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 03:56:49.571900    1912 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 03:56:49.578893    1912 start.go:298] selected driver: qemu2
	I0911 03:56:49.578898    1912 start.go:902] validating driver "qemu2" against &{Name:functional-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-740000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:56:49.578945    1912 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 03:56:49.580776    1912 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 03:56:49.580798    1912 cni.go:84] Creating CNI manager for ""
	I0911 03:56:49.580802    1912 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:56:49.580807    1912 start_flags.go:321] config:
	{Name:functional-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-740000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:56:49.584500    1912 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 03:56:49.592884    1912 out.go:177] * Starting control plane node functional-740000 in cluster functional-740000
	I0911 03:56:49.596892    1912 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 03:56:49.596905    1912 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 03:56:49.596915    1912 cache.go:57] Caching tarball of preloaded images
	I0911 03:56:49.597153    1912 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 03:56:49.597197    1912 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 03:56:49.597266    1912 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/config.json ...
	I0911 03:56:49.597528    1912 start.go:365] acquiring machines lock for functional-740000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 03:56:49.597559    1912 start.go:369] acquired machines lock for "functional-740000" in 26.709µs
	I0911 03:56:49.597570    1912 start.go:96] Skipping create...Using existing machine configuration
	I0911 03:56:49.597573    1912 fix.go:54] fixHost starting: 
	I0911 03:56:49.598304    1912 fix.go:102] recreateIfNeeded on functional-740000: state=Running err=<nil>
	W0911 03:56:49.598313    1912 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 03:56:49.605907    1912 out.go:177] * Updating the running qemu2 "functional-740000" VM ...
	I0911 03:56:49.609895    1912 machine.go:88] provisioning docker machine ...
	I0911 03:56:49.609903    1912 buildroot.go:166] provisioning hostname "functional-740000"
	I0911 03:56:49.609939    1912 main.go:141] libmachine: Using SSH client type: native
	I0911 03:56:49.610172    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024323b0] 0x102434e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:56:49.610176    1912 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-740000 && echo "functional-740000" | sudo tee /etc/hostname
	I0911 03:56:49.676058    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-740000
	
	I0911 03:56:49.676097    1912 main.go:141] libmachine: Using SSH client type: native
	I0911 03:56:49.676329    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024323b0] 0x102434e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:56:49.676339    1912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-740000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-740000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-740000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 03:56:49.736638    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 03:56:49.736643    1912 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17223-1124/.minikube CaCertPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17223-1124/.minikube}
	I0911 03:56:49.736648    1912 buildroot.go:174] setting up certificates
	I0911 03:56:49.736655    1912 provision.go:83] configureAuth start
	I0911 03:56:49.736657    1912 provision.go:138] copyHostCerts
	I0911 03:56:49.736716    1912 exec_runner.go:144] found /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.pem, removing ...
	I0911 03:56:49.736719    1912 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.pem
	I0911 03:56:49.736827    1912 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.pem (1078 bytes)
	I0911 03:56:49.736986    1912 exec_runner.go:144] found /Users/jenkins/minikube-integration/17223-1124/.minikube/cert.pem, removing ...
	I0911 03:56:49.736987    1912 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17223-1124/.minikube/cert.pem
	I0911 03:56:49.737027    1912 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17223-1124/.minikube/cert.pem (1123 bytes)
	I0911 03:56:49.737110    1912 exec_runner.go:144] found /Users/jenkins/minikube-integration/17223-1124/.minikube/key.pem, removing ...
	I0911 03:56:49.737111    1912 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17223-1124/.minikube/key.pem
	I0911 03:56:49.737148    1912 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17223-1124/.minikube/key.pem (1679 bytes)
	I0911 03:56:49.737213    1912 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca-key.pem org=jenkins.functional-740000 san=[192.168.105.4 192.168.105.4 localhost 127.0.0.1 minikube functional-740000]
	I0911 03:56:49.837614    1912 provision.go:172] copyRemoteCerts
	I0911 03:56:49.837646    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 03:56:49.837652    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/id_rsa Username:docker}
	I0911 03:56:49.870603    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0911 03:56:49.877587    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0911 03:56:49.886129    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 03:56:49.892737    1912 provision.go:86] duration metric: configureAuth took 156.076834ms
	I0911 03:56:49.892742    1912 buildroot.go:189] setting minikube options for container-runtime
	I0911 03:56:49.892855    1912 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:56:49.892882    1912 main.go:141] libmachine: Using SSH client type: native
	I0911 03:56:49.893095    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024323b0] 0x102434e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:56:49.893098    1912 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0911 03:56:49.955824    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0911 03:56:49.955832    1912 buildroot.go:70] root file system type: tmpfs
	I0911 03:56:49.955880    1912 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0911 03:56:49.955933    1912 main.go:141] libmachine: Using SSH client type: native
	I0911 03:56:49.956164    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024323b0] 0x102434e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:56:49.956199    1912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0911 03:56:50.021699    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0911 03:56:50.021743    1912 main.go:141] libmachine: Using SSH client type: native
	I0911 03:56:50.021975    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024323b0] 0x102434e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:56:50.021982    1912 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0911 03:56:50.085667    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 03:56:50.085673    1912 machine.go:91] provisioned docker machine in 475.786708ms
	I0911 03:56:50.085677    1912 start.go:300] post-start starting for "functional-740000" (driver="qemu2")
	I0911 03:56:50.085681    1912 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 03:56:50.085729    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 03:56:50.085736    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/id_rsa Username:docker}
	I0911 03:56:50.120585    1912 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 03:56:50.122085    1912 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 03:56:50.122093    1912 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17223-1124/.minikube/addons for local assets ...
	I0911 03:56:50.122160    1912 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17223-1124/.minikube/files for local assets ...
	I0911 03:56:50.122262    1912 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem -> 15652.pem in /etc/ssl/certs
	I0911 03:56:50.122360    1912 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/test/nested/copy/1565/hosts -> hosts in /etc/test/nested/copy/1565
	I0911 03:56:50.122391    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1565
	I0911 03:56:50.125028    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem --> /etc/ssl/certs/15652.pem (1708 bytes)
	I0911 03:56:50.131840    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/test/nested/copy/1565/hosts --> /etc/test/nested/copy/1565/hosts (40 bytes)
	I0911 03:56:50.138919    1912 start.go:303] post-start completed in 53.238542ms
	I0911 03:56:50.138923    1912 fix.go:56] fixHost completed within 541.365042ms
	I0911 03:56:50.138961    1912 main.go:141] libmachine: Using SSH client type: native
	I0911 03:56:50.139200    1912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1024323b0] 0x102434e10 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0911 03:56:50.139203    1912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 03:56:50.201190    1912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694429810.258259928
	
	I0911 03:56:50.201194    1912 fix.go:206] guest clock: 1694429810.258259928
	I0911 03:56:50.201197    1912 fix.go:219] Guest: 2023-09-11 03:56:50.258259928 -0700 PDT Remote: 2023-09-11 03:56:50.138924 -0700 PDT m=+0.641687709 (delta=119.335928ms)
	I0911 03:56:50.201209    1912 fix.go:190] guest clock delta is within tolerance: 119.335928ms
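
The two timestamps above come from running `date +%s.%N` in the guest and comparing against the host clock; the start proceeds because the ~119ms delta is inside tolerance. A rough sketch of that comparison (the one-second tolerance here is an assumption for illustration; minikube's actual threshold may differ):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock converts `date +%s.%N` output into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, _ := parseGuestClock("1694429810.258259928") // value from the log above
    	host := time.Unix(1694429810, 138924000)            // host-side timestamp, approximated
    	delta := guest.Sub(host)
    	const tolerance = time.Second // assumed tolerance, for illustration only
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, delta < tolerance && delta > -tolerance)
    }
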
	I0911 03:56:50.201210    1912 start.go:83] releasing machines lock for "functional-740000", held for 603.661208ms
	I0911 03:56:50.201524    1912 ssh_runner.go:195] Run: cat /version.json
	I0911 03:56:50.201530    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/id_rsa Username:docker}
	I0911 03:56:50.201534    1912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 03:56:50.201548    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/id_rsa Username:docker}
	I0911 03:56:50.274603    1912 ssh_runner.go:195] Run: systemctl --version
	I0911 03:56:50.276553    1912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 03:56:50.278457    1912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 03:56:50.278487    1912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 03:56:50.281545    1912 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0911 03:56:50.281551    1912 start.go:466] detecting cgroup driver to use...
	I0911 03:56:50.281624    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 03:56:50.287575    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0911 03:56:50.291312    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0911 03:56:50.294400    1912 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0911 03:56:50.294419    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0911 03:56:50.297621    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 03:56:50.300600    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0911 03:56:50.303935    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 03:56:50.307534    1912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 03:56:50.310683    1912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0911 03:56:50.313893    1912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 03:56:50.316551    1912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 03:56:50.319820    1912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:56:50.405002    1912 ssh_runner.go:195] Run: sudo systemctl restart containerd
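
Each of the sed invocations above follows the same pattern: an anchored, indentation-preserving regex rewrite of a single key in /etc/containerd/config.toml, followed by one daemon-reload and restart once all edits are in place. The SystemdCgroup edit, for example, could be expressed in Go like this (a toy equivalent operating on an in-memory string rather than the file):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
    	// (?m) makes ^/$ match per line; ${1} keeps the original indentation,
    	// just as the `\1` backreference does in the sed command above.
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }
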
	I0911 03:56:50.411107    1912 start.go:466] detecting cgroup driver to use...
	I0911 03:56:50.411146    1912 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0911 03:56:50.419955    1912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 03:56:50.424885    1912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 03:56:50.431043    1912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 03:56:50.435979    1912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0911 03:56:50.440935    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 03:56:50.446042    1912 ssh_runner.go:195] Run: which cri-dockerd
	I0911 03:56:50.447416    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0911 03:56:50.450411    1912 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0911 03:56:50.454903    1912 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0911 03:56:50.539439    1912 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0911 03:56:50.621796    1912 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0911 03:56:50.621808    1912 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0911 03:56:50.627438    1912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:56:50.717537    1912 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0911 03:57:02.065071    1912 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.347810584s)
	I0911 03:57:02.065137    1912 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0911 03:57:02.137833    1912 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0911 03:57:02.214987    1912 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0911 03:57:02.277189    1912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:57:02.340903    1912 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0911 03:57:02.348599    1912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:57:02.418320    1912 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0911 03:57:02.443448    1912 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0911 03:57:02.443538    1912 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0911 03:57:02.446058    1912 start.go:534] Will wait 60s for crictl version
	I0911 03:57:02.446103    1912 ssh_runner.go:195] Run: which crictl
	I0911 03:57:02.447686    1912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 03:57:02.460360    1912 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.5
	RuntimeApiVersion:  v1alpha2
	I0911 03:57:02.460430    1912 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0911 03:57:02.468347    1912 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
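
`docker version --format {{.Server.Version}}` serves as a cheap liveness-plus-version probe of the freshly restarted daemon: the CLI applies the Go template to the API response client-side, so the output is just the bare version string. A hedged sketch of the same probe:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Ask dockerd for just the server version; a non-nil error also tells
    	// us the daemon is not reachable yet.
    	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
    	if err != nil {
    		fmt.Println("docker not reachable:", err)
    		return
    	}
    	fmt.Println("server version:", strings.TrimSpace(string(out))) // e.g. "24.0.5"
    }
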
	I0911 03:57:02.480610    1912 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	I0911 03:57:02.480761    1912 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0911 03:57:02.487504    1912 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0911 03:57:02.489063    1912 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 03:57:02.489120    1912 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0911 03:57:02.499150    1912 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-740000
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0911 03:57:02.499158    1912 docker.go:566] Images already preloaded, skipping extraction
	I0911 03:57:02.499208    1912 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0911 03:57:02.504859    1912 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-740000
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0911 03:57:02.504865    1912 cache_images.go:84] Images are preloaded, skipping loading
	I0911 03:57:02.504921    1912 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0911 03:57:02.512546    1912 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0911 03:57:02.512560    1912 cni.go:84] Creating CNI manager for ""
	I0911 03:57:02.512565    1912 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:57:02.512568    1912 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 03:57:02.512576    1912 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-740000 NodeName:functional-740000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 03:57:02.512630    1912 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-740000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 03:57:02.512661    1912 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-740000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:functional-740000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
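
The InitConfiguration/ClusterConfiguration/KubeletConfiguration documents above are rendered from the option struct logged at kubeadm.go:176. A toy version of that templating step using text/template (the template text and field names are illustrative, not minikube's actual template):

    package main

    import (
    	"os"
    	"text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	// Fill the template from a struct, the way the logged options struct
    	// feeds the real kubeadm config.
    	_ = t.Execute(os.Stdout, struct {
    		AdvertiseAddress string
    		APIServerPort    int
    	}{"192.168.105.4", 8441})
    }
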
	I0911 03:57:02.512727    1912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 03:57:02.516138    1912 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 03:57:02.516161    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 03:57:02.519376    1912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0911 03:57:02.524608    1912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 03:57:02.529733    1912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1953 bytes)
	I0911 03:57:02.534573    1912 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0911 03:57:02.535755    1912 certs.go:56] Setting up /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000 for IP: 192.168.105.4
	I0911 03:57:02.535762    1912 certs.go:190] acquiring lock for shared ca certs: {Name:mk38c09806021c18792511eb48bf232ccb80ec29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:57:02.535892    1912 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.key
	I0911 03:57:02.535932    1912 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.key
	I0911 03:57:02.535992    1912 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.key
	I0911 03:57:02.536035    1912 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/apiserver.key.942c473b
	I0911 03:57:02.536068    1912 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/proxy-client.key
	I0911 03:57:02.536207    1912 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/1565.pem (1338 bytes)
	W0911 03:57:02.536230    1912 certs.go:433] ignoring /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/1565_empty.pem, impossibly tiny 0 bytes
	I0911 03:57:02.536235    1912 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 03:57:02.536257    1912 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem (1078 bytes)
	I0911 03:57:02.536278    1912 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem (1123 bytes)
	I0911 03:57:02.536295    1912 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/key.pem (1679 bytes)
	I0911 03:57:02.536335    1912 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem (1708 bytes)
	I0911 03:57:02.536699    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 03:57:02.543670    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 03:57:02.550810    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 03:57:02.557500    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0911 03:57:02.564222    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 03:57:02.571932    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 03:57:02.579401    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 03:57:02.586745    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0911 03:57:02.593647    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem --> /usr/share/ca-certificates/15652.pem (1708 bytes)
	I0911 03:57:02.600507    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 03:57:02.607810    1912 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/1565.pem --> /usr/share/ca-certificates/1565.pem (1338 bytes)
	I0911 03:57:02.615366    1912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 03:57:02.620417    1912 ssh_runner.go:195] Run: openssl version
	I0911 03:57:02.622475    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1565.pem && ln -fs /usr/share/ca-certificates/1565.pem /etc/ssl/certs/1565.pem"
	I0911 03:57:02.625412    1912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1565.pem
	I0911 03:57:02.626941    1912 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 10:55 /usr/share/ca-certificates/1565.pem
	I0911 03:57:02.626957    1912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1565.pem
	I0911 03:57:02.628907    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1565.pem /etc/ssl/certs/51391683.0"
	I0911 03:57:02.631950    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15652.pem && ln -fs /usr/share/ca-certificates/15652.pem /etc/ssl/certs/15652.pem"
	I0911 03:57:02.635297    1912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15652.pem
	I0911 03:57:02.637109    1912 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 10:55 /usr/share/ca-certificates/15652.pem
	I0911 03:57:02.637122    1912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15652.pem
	I0911 03:57:02.639000    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15652.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 03:57:02.641779    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 03:57:02.644874    1912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:57:02.646380    1912 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:54 /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:57:02.646401    1912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:57:02.648053    1912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
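
The openssl/ln pairs above reproduce what c_rehash does: each CA certificate is exposed in /etc/ssl/certs under its OpenSSL subject hash plus a ".0" suffix (e.g. b5213941.0 for minikubeCA.pem), which is the lookup scheme OpenSSL uses to find trust anchors. A small Go sketch of one such link step (paths are illustrative; requires the openssl binary and write access to the certs directory):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkByHash(certPath, certsDir string) error {
    	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // mirror `ln -fs` by replacing any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Println(err)
    	}
    }
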
	I0911 03:57:02.651179    1912 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 03:57:02.652488    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 03:57:02.654338    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 03:57:02.656168    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 03:57:02.658159    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 03:57:02.659968    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 03:57:02.661777    1912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0911 03:57:02.663608    1912 kubeadm.go:404] StartCluster: {Name:functional-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-740000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:57:02.663670    1912 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0911 03:57:02.669542    1912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 03:57:02.672491    1912 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 03:57:02.672498    1912 kubeadm.go:636] restartCluster start
	I0911 03:57:02.672522    1912 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 03:57:02.675543    1912 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 03:57:02.675816    1912 kubeconfig.go:92] found "functional-740000" server: "https://192.168.105.4:8441"
	I0911 03:57:02.676562    1912 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 03:57:02.679520    1912 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0911 03:57:02.679523    1912 kubeadm.go:1128] stopping kube-system containers ...
	I0911 03:57:02.679556    1912 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0911 03:57:02.686534    1912 docker.go:462] Stopping containers: [9fd9bdc0350e 6e2ca94c2389 0667be72cc80 2c9ee88482e3 6d788d6a9687 fa4547b4e52e b10509d704c0 677f73db2075 e08dd8884bdc 6acb173901ae db73d6546d4a a871e5c40f15 a4c7af6f9e07 94e5338bb00d 2c3721e9302f 8eff11f56a8a 28b97ce24746 8feb5e1b0882 c382ed08189d 62f75ef71438 92199ecc7aaf 13f9ff7851a4 a2908050622a a8b0d8a93bf8 c1e0396c5c98 5a1f6773f76b 4785ef1b4034 260f6564628d]
	I0911 03:57:02.686614    1912 ssh_runner.go:195] Run: docker stop 9fd9bdc0350e 6e2ca94c2389 0667be72cc80 2c9ee88482e3 6d788d6a9687 fa4547b4e52e b10509d704c0 677f73db2075 e08dd8884bdc 6acb173901ae db73d6546d4a a871e5c40f15 a4c7af6f9e07 94e5338bb00d 2c3721e9302f 8eff11f56a8a 28b97ce24746 8feb5e1b0882 c382ed08189d 62f75ef71438 92199ecc7aaf 13f9ff7851a4 a2908050622a a8b0d8a93bf8 c1e0396c5c98 5a1f6773f76b 4785ef1b4034 260f6564628d
	I0911 03:57:02.693696    1912 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 03:57:02.793500    1912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 03:57:02.797680    1912 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Sep 11 10:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Sep 11 10:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep 11 10:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Sep 11 10:55 /etc/kubernetes/scheduler.conf
	
	I0911 03:57:02.797710    1912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0911 03:57:02.801071    1912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0911 03:57:02.804550    1912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0911 03:57:02.808035    1912 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0911 03:57:02.808059    1912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0911 03:57:02.811478    1912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0911 03:57:02.814490    1912 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0911 03:57:02.814514    1912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0911 03:57:02.817249    1912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 03:57:02.820034    1912 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 03:57:02.820037    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 03:57:02.840324    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 03:57:03.466199    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 03:57:03.556198    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 03:57:03.584350    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 03:57:03.611608    1912 api_server.go:52] waiting for apiserver process to appear ...
	I0911 03:57:03.611661    1912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 03:57:03.623430    1912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 03:57:04.129771    1912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 03:57:04.629759    1912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 03:57:04.634108    1912 api_server.go:72] duration metric: took 1.022527s to wait for apiserver process to appear ...
	I0911 03:57:04.634113    1912 api_server.go:88] waiting for apiserver healthz status ...
	I0911 03:57:04.634121    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0911 03:57:06.342578    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 03:57:06.342587    1912 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 03:57:06.342592    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0911 03:57:06.349677    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 03:57:06.349683    1912 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 03:57:06.851716    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0911 03:57:06.855426    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 03:57:06.855432    1912 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 03:57:07.351696    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0911 03:57:07.355967    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 03:57:07.355974    1912 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 03:57:07.850183    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0911 03:57:07.853740    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0911 03:57:07.859415    1912 api_server.go:141] control plane version: v1.28.1
	I0911 03:57:07.859420    1912 api_server.go:131] duration metric: took 3.225386583s to wait for apiserver health ...
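
The healthz loop above treats 403 (RBAC roles not yet bootstrapped, so the anonymous probe is refused) and 500 (failing poststarthooks) as "keep retrying", and stops at the first 200 "ok". A rough Go sketch of that polling shape (illustrative; the real client authenticates rather than skipping TLS verification):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // first 200 "ok" ends the wait
    			}
    			// 403 and 500 both mean "not ready yet": log and retry.
    			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy")
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.105.4:8441/healthz", time.Minute))
    }
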
	I0911 03:57:07.859424    1912 cni.go:84] Creating CNI manager for ""
	I0911 03:57:07.859429    1912 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:57:07.862630    1912 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 03:57:07.866640    1912 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 03:57:07.869840    1912 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 03:57:07.874642    1912 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 03:57:07.879262    1912 system_pods.go:59] 7 kube-system pods found
	I0911 03:57:07.879270    1912 system_pods.go:61] "coredns-5dd5756b68-cshzx" [fab96eef-4c97-42a0-82f6-3f6404f4b9c8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 03:57:07.879273    1912 system_pods.go:61] "etcd-functional-740000" [11139528-a46e-44fa-b56c-83024d6ed373] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 03:57:07.879277    1912 system_pods.go:61] "kube-apiserver-functional-740000" [c1bdad66-92ee-4902-b51d-244ddadb89a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 03:57:07.879280    1912 system_pods.go:61] "kube-controller-manager-functional-740000" [41b2f85e-a8ca-46a2-abbd-54e8354cc183] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 03:57:07.879283    1912 system_pods.go:61] "kube-proxy-xmhw9" [94142ec5-c850-4cea-8eb1-2f6f78c30c0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 03:57:07.879285    1912 system_pods.go:61] "kube-scheduler-functional-740000" [bcec002f-f589-4db4-be22-fc7de65ebb6f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 03:57:07.879287    1912 system_pods.go:61] "storage-provisioner" [bb69cc6c-d468-4340-92f4-8386dbe0fa68] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 03:57:07.879289    1912 system_pods.go:74] duration metric: took 4.644709ms to wait for pod list to return data ...
	I0911 03:57:07.879291    1912 node_conditions.go:102] verifying NodePressure condition ...
	I0911 03:57:07.880818    1912 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0911 03:57:07.880824    1912 node_conditions.go:123] node cpu capacity is 2
	I0911 03:57:07.880829    1912 node_conditions.go:105] duration metric: took 1.535959ms to run NodePressure ...
	I0911 03:57:07.880835    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 03:57:07.970196    1912 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 03:57:07.972627    1912 kubeadm.go:787] kubelet initialised
	I0911 03:57:07.972631    1912 kubeadm.go:788] duration metric: took 2.429208ms waiting for restarted kubelet to initialise ...
	I0911 03:57:07.972635    1912 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 03:57:07.975393    1912 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-cshzx" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:09.983905    1912 pod_ready.go:92] pod "coredns-5dd5756b68-cshzx" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:09.983910    1912 pod_ready.go:81] duration metric: took 2.008564125s waiting for pod "coredns-5dd5756b68-cshzx" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:09.983915    1912 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:11.994060    1912 pod_ready.go:102] pod "etcd-functional-740000" in "kube-system" namespace has status "Ready":"False"
	I0911 03:57:14.493054    1912 pod_ready.go:102] pod "etcd-functional-740000" in "kube-system" namespace has status "Ready":"False"
	I0911 03:57:16.493241    1912 pod_ready.go:102] pod "etcd-functional-740000" in "kube-system" namespace has status "Ready":"False"
	I0911 03:57:18.993074    1912 pod_ready.go:102] pod "etcd-functional-740000" in "kube-system" namespace has status "Ready":"False"
	I0911 03:57:19.493416    1912 pod_ready.go:92] pod "etcd-functional-740000" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:19.493423    1912 pod_ready.go:81] duration metric: took 9.509746875s waiting for pod "etcd-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:19.493428    1912 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:19.495811    1912 pod_ready.go:92] pod "kube-apiserver-functional-740000" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:19.495814    1912 pod_ready.go:81] duration metric: took 2.383167ms waiting for pod "kube-apiserver-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:19.495817    1912 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:21.504785    1912 pod_ready.go:102] pod "kube-controller-manager-functional-740000" in "kube-system" namespace has status "Ready":"False"
	I0911 03:57:22.505355    1912 pod_ready.go:92] pod "kube-controller-manager-functional-740000" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:22.505361    1912 pod_ready.go:81] duration metric: took 3.009617416s waiting for pod "kube-controller-manager-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:22.505365    1912 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xmhw9" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:22.507784    1912 pod_ready.go:92] pod "kube-proxy-xmhw9" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:22.507789    1912 pod_ready.go:81] duration metric: took 2.421959ms waiting for pod "kube-proxy-xmhw9" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:22.507792    1912 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:22.510053    1912 pod_ready.go:92] pod "kube-scheduler-functional-740000" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:22.510056    1912 pod_ready.go:81] duration metric: took 2.262125ms waiting for pod "kube-scheduler-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:22.510068    1912 pod_ready.go:38] duration metric: took 14.537790375s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 03:57:22.510075    1912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 03:57:22.513842    1912 ops.go:34] apiserver oom_adj: -16
	I0911 03:57:22.513852    1912 kubeadm.go:640] restartCluster took 19.84184925s
	I0911 03:57:22.513854    1912 kubeadm.go:406] StartCluster complete in 19.850752333s
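
The pod_ready waits above poll each system pod until its PodReady condition reports True. Using client-go, the equivalent check might look like the sketch below (the kubeconfig path and pod name are placeholders; this assumes the k8s.io/client-go dependency):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-functional-740000", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for pod")
    }
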
	I0911 03:57:22.513861    1912 settings.go:142] acquiring lock: {Name:mk1469232b3abbdcc69ed77e286fb2789adb44fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:57:22.513951    1912 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 03:57:22.514271    1912 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/kubeconfig: {Name:mk8b43c711db1489632c69fe978a061a5dcf6734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:57:22.514508    1912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 03:57:22.514548    1912 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 03:57:22.514582    1912 addons.go:69] Setting storage-provisioner=true in profile "functional-740000"
	I0911 03:57:22.514585    1912 addons.go:69] Setting default-storageclass=true in profile "functional-740000"
	I0911 03:57:22.514588    1912 addons.go:231] Setting addon storage-provisioner=true in "functional-740000"
	W0911 03:57:22.514591    1912 addons.go:240] addon storage-provisioner should already be in state true
	I0911 03:57:22.514591    1912 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-740000"
	I0911 03:57:22.514604    1912 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:57:22.514616    1912 host.go:66] Checking if "functional-740000" exists ...
	I0911 03:57:22.520608    1912 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 03:57:22.523546    1912 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 03:57:22.523550    1912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 03:57:22.523557    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/id_rsa Username:docker}
	I0911 03:57:22.524001    1912 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-740000" context rescaled to 1 replicas
	I0911 03:57:22.524012    1912 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 03:57:22.527554    1912 out.go:177] * Verifying Kubernetes components...
	I0911 03:57:22.526050    1912 addons.go:231] Setting addon default-storageclass=true in "functional-740000"
	W0911 03:57:22.533493    1912 addons.go:240] addon default-storageclass should already be in state true
	I0911 03:57:22.533507    1912 host.go:66] Checking if "functional-740000" exists ...
	I0911 03:57:22.533534    1912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 03:57:22.534216    1912 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 03:57:22.534219    1912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 03:57:22.534225    1912 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/id_rsa Username:docker}
	I0911 03:57:22.557658    1912 node_ready.go:35] waiting up to 6m0s for node "functional-740000" to be "Ready" ...
	I0911 03:57:22.557673    1912 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0911 03:57:22.559546    1912 node_ready.go:49] node "functional-740000" has status "Ready":"True"
	I0911 03:57:22.559556    1912 node_ready.go:38] duration metric: took 1.88125ms waiting for node "functional-740000" to be "Ready" ...
	I0911 03:57:22.559559    1912 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 03:57:22.564510    1912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 03:57:22.594466    1912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 03:57:22.695217    1912 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cshzx" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:22.902791    1912 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0911 03:57:22.910824    1912 addons.go:502] enable addons completed in 396.312375ms: enabled=[storage-provisioner default-storageclass]
	I0911 03:57:23.093786    1912 pod_ready.go:92] pod "coredns-5dd5756b68-cshzx" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:23.093791    1912 pod_ready.go:81] duration metric: took 398.578958ms waiting for pod "coredns-5dd5756b68-cshzx" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:23.093796    1912 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:23.493558    1912 pod_ready.go:92] pod "etcd-functional-740000" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:23.493563    1912 pod_ready.go:81] duration metric: took 399.774958ms waiting for pod "etcd-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:23.493567    1912 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:23.892679    1912 pod_ready.go:92] pod "kube-apiserver-functional-740000" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:23.892684    1912 pod_ready.go:81] duration metric: took 399.124917ms waiting for pod "kube-apiserver-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:23.892688    1912 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:24.293304    1912 pod_ready.go:92] pod "kube-controller-manager-functional-740000" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:24.293308    1912 pod_ready.go:81] duration metric: took 400.628459ms waiting for pod "kube-controller-manager-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:24.293334    1912 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xmhw9" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:24.693836    1912 pod_ready.go:92] pod "kube-proxy-xmhw9" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:24.693841    1912 pod_ready.go:81] duration metric: took 400.515084ms waiting for pod "kube-proxy-xmhw9" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:24.693846    1912 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:25.093582    1912 pod_ready.go:92] pod "kube-scheduler-functional-740000" in "kube-system" namespace has status "Ready":"True"
	I0911 03:57:25.093589    1912 pod_ready.go:81] duration metric: took 399.749666ms waiting for pod "kube-scheduler-functional-740000" in "kube-system" namespace to be "Ready" ...
	I0911 03:57:25.093593    1912 pod_ready.go:38] duration metric: took 2.534094459s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 03:57:25.093604    1912 api_server.go:52] waiting for apiserver process to appear ...
	I0911 03:57:25.093688    1912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 03:57:25.098145    1912 api_server.go:72] duration metric: took 2.574189708s to wait for apiserver process to appear ...
	I0911 03:57:25.098149    1912 api_server.go:88] waiting for apiserver healthz status ...
	I0911 03:57:25.098155    1912 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0911 03:57:25.101291    1912 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0911 03:57:25.101932    1912 api_server.go:141] control plane version: v1.28.1
	I0911 03:57:25.101935    1912 api_server.go:131] duration metric: took 3.784709ms to wait for apiserver health ...
	I0911 03:57:25.101937    1912 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 03:57:25.295410    1912 system_pods.go:59] 7 kube-system pods found
	I0911 03:57:25.295416    1912 system_pods.go:61] "coredns-5dd5756b68-cshzx" [fab96eef-4c97-42a0-82f6-3f6404f4b9c8] Running
	I0911 03:57:25.295418    1912 system_pods.go:61] "etcd-functional-740000" [11139528-a46e-44fa-b56c-83024d6ed373] Running
	I0911 03:57:25.295420    1912 system_pods.go:61] "kube-apiserver-functional-740000" [c1bdad66-92ee-4902-b51d-244ddadb89a4] Running
	I0911 03:57:25.295422    1912 system_pods.go:61] "kube-controller-manager-functional-740000" [41b2f85e-a8ca-46a2-abbd-54e8354cc183] Running
	I0911 03:57:25.295424    1912 system_pods.go:61] "kube-proxy-xmhw9" [94142ec5-c850-4cea-8eb1-2f6f78c30c0e] Running
	I0911 03:57:25.295425    1912 system_pods.go:61] "kube-scheduler-functional-740000" [bcec002f-f589-4db4-be22-fc7de65ebb6f] Running
	I0911 03:57:25.295427    1912 system_pods.go:61] "storage-provisioner" [bb69cc6c-d468-4340-92f4-8386dbe0fa68] Running
	I0911 03:57:25.295429    1912 system_pods.go:74] duration metric: took 193.495042ms to wait for pod list to return data ...
	I0911 03:57:25.295432    1912 default_sa.go:34] waiting for default service account to be created ...
	I0911 03:57:25.493573    1912 default_sa.go:45] found service account: "default"
	I0911 03:57:25.493578    1912 default_sa.go:55] duration metric: took 198.149625ms for default service account to be created ...
	I0911 03:57:25.493581    1912 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 03:57:25.694255    1912 system_pods.go:86] 7 kube-system pods found
	I0911 03:57:25.694261    1912 system_pods.go:89] "coredns-5dd5756b68-cshzx" [fab96eef-4c97-42a0-82f6-3f6404f4b9c8] Running
	I0911 03:57:25.694264    1912 system_pods.go:89] "etcd-functional-740000" [11139528-a46e-44fa-b56c-83024d6ed373] Running
	I0911 03:57:25.694266    1912 system_pods.go:89] "kube-apiserver-functional-740000" [c1bdad66-92ee-4902-b51d-244ddadb89a4] Running
	I0911 03:57:25.694268    1912 system_pods.go:89] "kube-controller-manager-functional-740000" [41b2f85e-a8ca-46a2-abbd-54e8354cc183] Running
	I0911 03:57:25.694270    1912 system_pods.go:89] "kube-proxy-xmhw9" [94142ec5-c850-4cea-8eb1-2f6f78c30c0e] Running
	I0911 03:57:25.694272    1912 system_pods.go:89] "kube-scheduler-functional-740000" [bcec002f-f589-4db4-be22-fc7de65ebb6f] Running
	I0911 03:57:25.694273    1912 system_pods.go:89] "storage-provisioner" [bb69cc6c-d468-4340-92f4-8386dbe0fa68] Running
	I0911 03:57:25.694275    1912 system_pods.go:126] duration metric: took 200.698209ms to wait for k8s-apps to be running ...
	I0911 03:57:25.694277    1912 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 03:57:25.694328    1912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 03:57:25.699330    1912 system_svc.go:56] duration metric: took 5.049792ms WaitForService to wait for kubelet.
	I0911 03:57:25.699334    1912 kubeadm.go:581] duration metric: took 3.175394666s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 03:57:25.699342    1912 node_conditions.go:102] verifying NodePressure condition ...
	I0911 03:57:25.893622    1912 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0911 03:57:25.893629    1912 node_conditions.go:123] node cpu capacity is 2
	I0911 03:57:25.893634    1912 node_conditions.go:105] duration metric: took 194.294958ms to run NodePressure ...
	I0911 03:57:25.893639    1912 start.go:228] waiting for startup goroutines ...
	I0911 03:57:25.893641    1912 start.go:233] waiting for cluster config update ...
	I0911 03:57:25.893645    1912 start.go:242] writing updated cluster config ...
	I0911 03:57:25.893970    1912 ssh_runner.go:195] Run: rm -f paused
	I0911 03:57:25.922985    1912 start.go:600] kubectl: 1.27.2, cluster: 1.28.1 (minor skew: 1)
	I0911 03:57:25.927973    1912 out.go:177] * Done! kubectl is now configured to use "functional-740000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-11 10:55:34 UTC, ends at Mon 2023-09-11 10:57:31 UTC. --
	Sep 11 10:57:08 functional-740000 dockerd[6622]: time="2023-09-11T10:57:08.128485651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 11 10:57:08 functional-740000 dockerd[6622]: time="2023-09-11T10:57:08.128515692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:57:08 functional-740000 dockerd[6622]: time="2023-09-11T10:57:08.128540900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 11 10:57:08 functional-740000 dockerd[6622]: time="2023-09-11T10:57:08.128547816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:57:08 functional-740000 cri-dockerd[6879]: time="2023-09-11T10:57:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/852699c4a28808ceec998c88491eb3ec7906c1725b927cb3d483e29a56539768/resolv.conf as [nameserver 192.168.105.1]"
	Sep 11 10:57:08 functional-740000 dockerd[6622]: time="2023-09-11T10:57:08.217118637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 11 10:57:08 functional-740000 dockerd[6622]: time="2023-09-11T10:57:08.217160511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:57:08 functional-740000 dockerd[6622]: time="2023-09-11T10:57:08.217167136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 11 10:57:08 functional-740000 dockerd[6622]: time="2023-09-11T10:57:08.217171427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:57:08 functional-740000 cri-dockerd[6879]: time="2023-09-11T10:57:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c8e576d6bf3e9cc9eb13a122c6e25135f63c5001b32806ac8c7a60ff523007c/resolv.conf as [nameserver 192.168.105.1]"
	Sep 11 10:57:08 functional-740000 dockerd[6622]: time="2023-09-11T10:57:08.292326110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 11 10:57:08 functional-740000 dockerd[6622]: time="2023-09-11T10:57:08.292359526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:57:08 functional-740000 dockerd[6622]: time="2023-09-11T10:57:08.292368942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 11 10:57:08 functional-740000 dockerd[6622]: time="2023-09-11T10:57:08.292375108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:57:27 functional-740000 dockerd[6622]: time="2023-09-11T10:57:27.720978781Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 11 10:57:27 functional-740000 dockerd[6622]: time="2023-09-11T10:57:27.721017113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:57:27 functional-740000 dockerd[6622]: time="2023-09-11T10:57:27.721027446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 11 10:57:27 functional-740000 dockerd[6622]: time="2023-09-11T10:57:27.721039613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:57:27 functional-740000 cri-dockerd[6879]: time="2023-09-11T10:57:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c4da6aa5fcf82e4b31c4ddb45460c15e677b00e90ad05a698793df1d78241c50/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Sep 11 10:57:28 functional-740000 dockerd[6616]: time="2023-09-11T10:57:28.728755426Z" level=error msg="Not continuing with pull after error: errors:\ndenied: requested access to the resource is denied\nunauthorized: authentication required\n"
	Sep 11 10:57:28 functional-740000 dockerd[6616]: time="2023-09-11T10:57:28.728779926Z" level=info msg="Ignoring extra error returned from registry" error="unauthorized: authentication required"
	Sep 11 10:57:30 functional-740000 dockerd[6622]: time="2023-09-11T10:57:30.596582085Z" level=info msg="shim disconnected" id=c4da6aa5fcf82e4b31c4ddb45460c15e677b00e90ad05a698793df1d78241c50 namespace=moby
	Sep 11 10:57:30 functional-740000 dockerd[6616]: time="2023-09-11T10:57:30.596793040Z" level=info msg="ignoring event" container=c4da6aa5fcf82e4b31c4ddb45460c15e677b00e90ad05a698793df1d78241c50 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 11 10:57:30 functional-740000 dockerd[6622]: time="2023-09-11T10:57:30.596927288Z" level=warning msg="cleaning up after shim disconnected" id=c4da6aa5fcf82e4b31c4ddb45460c15e677b00e90ad05a698793df1d78241c50 namespace=moby
	Sep 11 10:57:30 functional-740000 dockerd[6622]: time="2023-09-11T10:57:30.596951537Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	30d6269da315e       97e04611ad434       23 seconds ago       Running             coredns                   2                   5c8e576d6bf3e
	6ced62e735a2c       ba04bb24b9575       23 seconds ago       Running             storage-provisioner       3                   852699c4a2880
	de4a10dbf990e       812f5241df7fd       23 seconds ago       Running             kube-proxy                2                   c509fa8239989
	3713dda03afbe       9cdd6470f48c8       27 seconds ago       Running             etcd                      2                   5ffcbb72f80b3
	d5ce1ab54e283       b29fb62480892       27 seconds ago       Running             kube-apiserver            0                   513830cbf8e78
	023d1ba072cbe       8b6e1980b7584       27 seconds ago       Running             kube-controller-manager   2                   09e281baef59c
	e347e144afa51       b4a5a57e99492       27 seconds ago       Running             kube-scheduler            2                   db9cb7dc6189f
	9fd9bdc0350e1       ba04bb24b9575       51 seconds ago       Exited              storage-provisioner       2                   94e5338bb00d0
	0667be72cc803       97e04611ad434       About a minute ago   Exited              coredns                   1                   e08dd8884bdcf
	2c9ee88482e3f       9cdd6470f48c8       About a minute ago   Exited              etcd                      1                   a871e5c40f159
	6d788d6a9687a       b29fb62480892       About a minute ago   Exited              kube-apiserver            1                   2c3721e9302f4
	fa4547b4e52ec       b4a5a57e99492       About a minute ago   Exited              kube-scheduler            1                   6acb173901ae8
	b10509d704c0e       8b6e1980b7584       About a minute ago   Exited              kube-controller-manager   1                   db73d6546d4a5
	677f73db20759       812f5241df7fd       About a minute ago   Exited              kube-proxy                1                   a4c7af6f9e070
	
	* 
	* ==> coredns [0667be72cc80] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34865 - 4202 "HINFO IN 5628587765548081682.7266691883950973552. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004921693s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [30d6269da315] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50356 - 33523 "HINFO IN 8020413452089812584.2274111581267159359. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004478005s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-740000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-740000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=functional-740000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T03_55_51_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 10:55:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-740000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 10:57:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 10:57:06 +0000   Mon, 11 Sep 2023 10:55:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 10:57:06 +0000   Mon, 11 Sep 2023 10:55:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 10:57:06 +0000   Mon, 11 Sep 2023 10:55:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 10:57:06 +0000   Mon, 11 Sep 2023 10:55:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-740000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 fc4e1b5ddc2b43169e12cb4be28b15ea
	  System UUID:                fc4e1b5ddc2b43169e12cb4be28b15ea
	  Boot ID:                    16b08f7b-fa6a-4d0e-b063-3a9bda515c0e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.5
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	  kube-system                 coredns-5dd5756b68-cshzx                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     87s
	  kube-system                 etcd-functional-740000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         100s
	  kube-system                 kube-apiserver-functional-740000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-controller-manager-functional-740000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-xmhw9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-functional-740000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 86s                kube-proxy       
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 65s                kube-proxy       
	  Normal  Starting                 100s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  100s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  100s               kubelet          Node functional-740000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s               kubelet          Node functional-740000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s               kubelet          Node functional-740000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                97s                kubelet          Node functional-740000 status is now: NodeReady
	  Normal  RegisteredNode           88s                node-controller  Node functional-740000 event: Registered Node functional-740000 in Controller
	  Normal  RegisteredNode           53s                node-controller  Node functional-740000 event: Registered Node functional-740000 in Controller
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s (x8 over 28s)  kubelet          Node functional-740000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x8 over 28s)  kubelet          Node functional-740000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x7 over 28s)  kubelet          Node functional-740000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13s                node-controller  Node functional-740000 event: Registered Node functional-740000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.278104] kauditd_printk_skb: 68 callbacks suppressed
	[  +4.352842] systemd-fstab-generator[2289]: Ignoring "noauto" for root device
	[Sep11 10:56] kauditd_printk_skb: 2 callbacks suppressed
	[  +4.124796] systemd-fstab-generator[3661]: Ignoring "noauto" for root device
	[  +0.137069] systemd-fstab-generator[3694]: Ignoring "noauto" for root device
	[  +0.087733] systemd-fstab-generator[3705]: Ignoring "noauto" for root device
	[  +0.092090] systemd-fstab-generator[3718]: Ignoring "noauto" for root device
	[  +5.157788] kauditd_printk_skb: 25 callbacks suppressed
	[  +6.252877] systemd-fstab-generator[4288]: Ignoring "noauto" for root device
	[  +0.064699] systemd-fstab-generator[4299]: Ignoring "noauto" for root device
	[  +0.064529] systemd-fstab-generator[4310]: Ignoring "noauto" for root device
	[  +0.062971] systemd-fstab-generator[4321]: Ignoring "noauto" for root device
	[  +0.099579] systemd-fstab-generator[4394]: Ignoring "noauto" for root device
	[  +5.100979] kauditd_printk_skb: 34 callbacks suppressed
	[ +24.203098] systemd-fstab-generator[6153]: Ignoring "noauto" for root device
	[  +0.137048] systemd-fstab-generator[6186]: Ignoring "noauto" for root device
	[  +0.078455] systemd-fstab-generator[6197]: Ignoring "noauto" for root device
	[  +0.097942] systemd-fstab-generator[6210]: Ignoring "noauto" for root device
	[Sep11 10:57] systemd-fstab-generator[6767]: Ignoring "noauto" for root device
	[  +0.080457] systemd-fstab-generator[6778]: Ignoring "noauto" for root device
	[  +0.063525] systemd-fstab-generator[6789]: Ignoring "noauto" for root device
	[  +0.063583] systemd-fstab-generator[6800]: Ignoring "noauto" for root device
	[  +0.072071] systemd-fstab-generator[6863]: Ignoring "noauto" for root device
	[  +1.139670] systemd-fstab-generator[7117]: Ignoring "noauto" for root device
	[  +4.651979] kauditd_printk_skb: 29 callbacks suppressed
	
	* 
	* ==> etcd [2c9ee88482e3] <==
	* {"level":"info","ts":"2023-09-11T10:56:23.867217Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T10:56:25.421844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-11T10:56:25.421979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-11T10:56:25.422053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-09-11T10:56:25.422124Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-09-11T10:56:25.422164Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-11T10:56:25.422236Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-09-11T10:56:25.42229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-11T10:56:25.425186Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-740000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T10:56:25.425204Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T10:56:25.425258Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T10:56:25.428733Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-11T10:56:25.43012Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T10:56:25.430184Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T10:56:25.431107Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-11T10:56:50.787114Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-11T10:56:50.787141Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-740000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2023-09-11T10:56:50.787192Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-11T10:56:50.787233Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-11T10:56:50.796118Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-11T10:56:50.796136Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-11T10:56:50.796165Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-09-11T10:56:50.797735Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-11T10:56:50.797764Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-11T10:56:50.797768Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-740000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> etcd [3713dda03afb] <==
	* {"level":"info","ts":"2023-09-11T10:57:04.636305Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T10:57:04.636326Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T10:57:04.636444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-09-11T10:57:04.636538Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-09-11T10:57:04.636585Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T10:57:04.636696Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T10:57:04.638193Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-11T10:57:04.639926Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-11T10:57:04.64Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-09-11T10:57:04.640241Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-11T10:57:04.640266Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-11T10:57:05.800654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-11T10:57:05.800765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-11T10:57:05.800796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-09-11T10:57:05.800821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-09-11T10:57:05.800837Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-11T10:57:05.800855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-09-11T10:57:05.800871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-09-11T10:57:05.803984Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-740000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T10:57:05.80405Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T10:57:05.805733Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-11T10:57:05.806069Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T10:57:05.807517Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-09-11T10:57:05.817649Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T10:57:05.817675Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  10:57:31 up 1 min,  0 users,  load average: 0.45, 0.19, 0.07
	Linux functional-740000 5.10.57 #1 SMP PREEMPT Thu Aug 24 12:01:08 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [6d788d6a9687] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0911 10:57:00.746639       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0911 10:57:00.756117       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0911 10:57:00.769988       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [d5ce1ab54e28] <==
	* I0911 10:57:06.376281       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0911 10:57:06.421979       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0911 10:57:06.456805       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0911 10:57:06.473819       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0911 10:57:06.473826       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0911 10:57:06.474075       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0911 10:57:06.475606       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0911 10:57:06.475630       1 aggregator.go:166] initial CRD sync complete...
	I0911 10:57:06.475638       1 autoregister_controller.go:141] Starting autoregister controller
	I0911 10:57:06.475645       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0911 10:57:06.475656       1 cache.go:39] Caches are synced for autoregister controller
	E0911 10:57:06.475689       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0911 10:57:06.476001       1 shared_informer.go:318] Caches are synced for configmaps
	I0911 10:57:06.476031       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0911 10:57:06.476336       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0911 10:57:07.375297       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0911 10:57:07.996752       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0911 10:57:07.999989       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0911 10:57:08.013483       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0911 10:57:08.022186       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0911 10:57:08.024600       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0911 10:57:18.726792       1 controller.go:624] quota admission added evaluator for: endpoints
	I0911 10:57:18.927185       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0911 10:57:27.387699       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.185.17"}
	I0911 10:57:31.581679       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.96.178"}
	
	* 
	* ==> kube-controller-manager [023d1ba072cb] <==
	* I0911 10:57:18.724298       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-740000"
	I0911 10:57:18.724035       1 shared_informer.go:318] Caches are synced for ephemeral
	I0911 10:57:18.724131       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0911 10:57:18.724385       1 taint_manager.go:211] "Sending events to api server"
	I0911 10:57:18.724263       1 event.go:307] "Event occurred" object="functional-740000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-740000 event: Registered Node functional-740000 in Controller"
	I0911 10:57:18.724489       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0911 10:57:18.730081       1 shared_informer.go:318] Caches are synced for node
	I0911 10:57:18.730127       1 range_allocator.go:174] "Sending events to api server"
	I0911 10:57:18.730139       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0911 10:57:18.730141       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0911 10:57:18.730143       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0911 10:57:18.731304       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0911 10:57:18.731857       1 shared_informer.go:318] Caches are synced for crt configmap
	I0911 10:57:18.738899       1 shared_informer.go:318] Caches are synced for disruption
	I0911 10:57:18.748191       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0911 10:57:18.821615       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0911 10:57:18.856442       1 shared_informer.go:318] Caches are synced for resource quota
	I0911 10:57:18.861686       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0911 10:57:18.863870       1 shared_informer.go:318] Caches are synced for job
	I0911 10:57:18.910637       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0911 10:57:18.921259       1 shared_informer.go:318] Caches are synced for cronjob
	I0911 10:57:18.925655       1 shared_informer.go:318] Caches are synced for resource quota
	I0911 10:57:19.242274       1 shared_informer.go:318] Caches are synced for garbage collector
	I0911 10:57:19.263583       1 shared_informer.go:318] Caches are synced for garbage collector
	I0911 10:57:19.263596       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [b10509d704c0] <==
	* I0911 10:56:38.400850       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0911 10:56:38.412362       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0911 10:56:38.418551       1 shared_informer.go:318] Caches are synced for node
	I0911 10:56:38.418587       1 range_allocator.go:174] "Sending events to api server"
	I0911 10:56:38.418604       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0911 10:56:38.418606       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0911 10:56:38.418609       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0911 10:56:38.419623       1 shared_informer.go:318] Caches are synced for persistent volume
	I0911 10:56:38.420738       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0911 10:56:38.421811       1 shared_informer.go:318] Caches are synced for GC
	I0911 10:56:38.429801       1 shared_informer.go:318] Caches are synced for PVC protection
	I0911 10:56:38.430877       1 shared_informer.go:318] Caches are synced for PV protection
	I0911 10:56:38.432062       1 shared_informer.go:318] Caches are synced for expand
	I0911 10:56:38.433135       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0911 10:56:38.487676       1 shared_informer.go:318] Caches are synced for deployment
	I0911 10:56:38.490881       1 shared_informer.go:318] Caches are synced for HPA
	I0911 10:56:38.521273       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0911 10:56:38.521339       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="23.414µs"
	I0911 10:56:38.580583       1 shared_informer.go:318] Caches are synced for disruption
	I0911 10:56:38.628675       1 shared_informer.go:318] Caches are synced for resource quota
	I0911 10:56:38.631863       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0911 10:56:38.635135       1 shared_informer.go:318] Caches are synced for resource quota
	I0911 10:56:38.958857       1 shared_informer.go:318] Caches are synced for garbage collector
	I0911 10:56:38.963845       1 shared_informer.go:318] Caches are synced for garbage collector
	I0911 10:56:38.963861       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [677f73db2075] <==
	* I0911 10:56:23.416459       1 server_others.go:69] "Using iptables proxy"
	E0911 10:56:23.417312       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-740000": dial tcp 192.168.105.4:8441: connect: connection refused
	I0911 10:56:26.075901       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0911 10:56:26.098024       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0911 10:56:26.098041       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0911 10:56:26.098806       1 server_others.go:152] "Using iptables Proxier"
	I0911 10:56:26.098828       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 10:56:26.098901       1 server.go:846] "Version info" version="v1.28.1"
	I0911 10:56:26.098911       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 10:56:26.099394       1 config.go:188] "Starting service config controller"
	I0911 10:56:26.099407       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 10:56:26.099414       1 config.go:97] "Starting endpoint slice config controller"
	I0911 10:56:26.099416       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 10:56:26.099530       1 config.go:315] "Starting node config controller"
	I0911 10:56:26.099536       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 10:56:26.199947       1 shared_informer.go:318] Caches are synced for node config
	I0911 10:56:26.199952       1 shared_informer.go:318] Caches are synced for service config
	I0911 10:56:26.199958       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [de4a10dbf990] <==
	* I0911 10:57:08.251984       1 server_others.go:69] "Using iptables proxy"
	I0911 10:57:08.257734       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0911 10:57:08.271675       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0911 10:57:08.271722       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0911 10:57:08.272355       1 server_others.go:152] "Using iptables Proxier"
	I0911 10:57:08.272374       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 10:57:08.272439       1 server.go:846] "Version info" version="v1.28.1"
	I0911 10:57:08.272442       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 10:57:08.273179       1 config.go:188] "Starting service config controller"
	I0911 10:57:08.273187       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 10:57:08.273201       1 config.go:97] "Starting endpoint slice config controller"
	I0911 10:57:08.273203       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 10:57:08.275923       1 config.go:315] "Starting node config controller"
	I0911 10:57:08.275930       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 10:57:08.373304       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0911 10:57:08.373303       1 shared_informer.go:318] Caches are synced for service config
	I0911 10:57:08.376097       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e347e144afa5] <==
	* I0911 10:57:04.855434       1 serving.go:348] Generated self-signed cert in-memory
	W0911 10:57:06.403277       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0911 10:57:06.403393       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 10:57:06.403418       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0911 10:57:06.403435       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0911 10:57:06.439325       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0911 10:57:06.439375       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 10:57:06.440509       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0911 10:57:06.440812       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0911 10:57:06.440840       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 10:57:06.440857       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0911 10:57:06.541111       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [fa4547b4e52e] <==
	* I0911 10:56:24.223945       1 serving.go:348] Generated self-signed cert in-memory
	W0911 10:56:26.054661       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0911 10:56:26.054735       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 10:56:26.054770       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0911 10:56:26.054787       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0911 10:56:26.074658       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0911 10:56:26.074805       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 10:56:26.076286       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0911 10:56:26.076381       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0911 10:56:26.076408       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 10:56:26.076430       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0911 10:56:26.176754       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 10:56:50.814508       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0911 10:56:50.814536       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0911 10:56:50.814581       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0911 10:56:50.814681       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 10:55:34 UTC, ends at Mon 2023-09-11 10:57:31 UTC. --
	Sep 11 10:57:07 functional-740000 kubelet[7123]: I0911 10:57:07.670330    7123 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 11 10:57:07 functional-740000 kubelet[7123]: I0911 10:57:07.671990    7123 kubelet.go:1877] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-740000"
	Sep 11 10:57:07 functional-740000 kubelet[7123]: I0911 10:57:07.686430    7123 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-740000" podStartSLOduration=0.686396622 podCreationTimestamp="2023-09-11 10:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-11 10:57:07.682673601 +0000 UTC m=+4.070054917" watchObservedRunningTime="2023-09-11 10:57:07.686396622 +0000 UTC m=+4.073777938"
	Sep 11 10:57:07 functional-740000 kubelet[7123]: I0911 10:57:07.700790    7123 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d5699fa42a845e440b98d1910fffb098" path="/var/lib/kubelet/pods/d5699fa42a845e440b98d1910fffb098/volumes"
	Sep 11 10:57:07 functional-740000 kubelet[7123]: I0911 10:57:07.711592    7123 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/94142ec5-c850-4cea-8eb1-2f6f78c30c0e-xtables-lock\") pod \"kube-proxy-xmhw9\" (UID: \"94142ec5-c850-4cea-8eb1-2f6f78c30c0e\") " pod="kube-system/kube-proxy-xmhw9"
	Sep 11 10:57:07 functional-740000 kubelet[7123]: I0911 10:57:07.711610    7123 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/94142ec5-c850-4cea-8eb1-2f6f78c30c0e-lib-modules\") pod \"kube-proxy-xmhw9\" (UID: \"94142ec5-c850-4cea-8eb1-2f6f78c30c0e\") " pod="kube-system/kube-proxy-xmhw9"
	Sep 11 10:57:07 functional-740000 kubelet[7123]: I0911 10:57:07.711628    7123 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bb69cc6c-d468-4340-92f4-8386dbe0fa68-tmp\") pod \"storage-provisioner\" (UID: \"bb69cc6c-d468-4340-92f4-8386dbe0fa68\") " pod="kube-system/storage-provisioner"
	Sep 11 10:57:09 functional-740000 kubelet[7123]: I0911 10:57:09.852753    7123 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 11 10:57:27 functional-740000 kubelet[7123]: I0911 10:57:27.384568    7123 topology_manager.go:215] "Topology Admit Handler" podUID="5fcf74de-351a-49e1-a16a-06391d44b8d8" podNamespace="default" podName="invalid-svc"
	Sep 11 10:57:27 functional-740000 kubelet[7123]: E0911 10:57:27.384601    7123 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5699fa42a845e440b98d1910fffb098" containerName="kube-apiserver"
	Sep 11 10:57:27 functional-740000 kubelet[7123]: E0911 10:57:27.384607    7123 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5699fa42a845e440b98d1910fffb098" containerName="kube-apiserver"
	Sep 11 10:57:27 functional-740000 kubelet[7123]: I0911 10:57:27.384620    7123 memory_manager.go:346] "RemoveStaleState removing state" podUID="d5699fa42a845e440b98d1910fffb098" containerName="kube-apiserver"
	Sep 11 10:57:27 functional-740000 kubelet[7123]: I0911 10:57:27.384624    7123 memory_manager.go:346] "RemoveStaleState removing state" podUID="d5699fa42a845e440b98d1910fffb098" containerName="kube-apiserver"
	Sep 11 10:57:27 functional-740000 kubelet[7123]: I0911 10:57:27.430439    7123 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fxc8\" (UniqueName: \"kubernetes.io/projected/5fcf74de-351a-49e1-a16a-06391d44b8d8-kube-api-access-4fxc8\") pod \"invalid-svc\" (UID: \"5fcf74de-351a-49e1-a16a-06391d44b8d8\") " pod="default/invalid-svc"
	Sep 11 10:57:28 functional-740000 kubelet[7123]: E0911 10:57:28.730240    7123 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: pull access denied for nonexistingimage, repository does not exist or may require 'docker login': denied: requested access to the resource is denied" image="nonexistingimage:latest"
	Sep 11 10:57:28 functional-740000 kubelet[7123]: E0911 10:57:28.730266    7123 kuberuntime_image.go:53] "Failed to pull image" err="Error response from daemon: pull access denied for nonexistingimage, repository does not exist or may require 'docker login': denied: requested access to the resource is denied" image="nonexistingimage:latest"
	Sep 11 10:57:28 functional-740000 kubelet[7123]: E0911 10:57:28.730355    7123 kuberuntime_manager.go:1209] container &Container{Name:nginx,Image:nonexistingimage:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4fxc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod invalid-svc_default(5fcf74de-351a-49e1-a16a-06391d44b8d8):
ErrImagePull: Error response from daemon: pull access denied for nonexistingimage, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
	Sep 11 10:57:28 functional-740000 kubelet[7123]: E0911 10:57:28.730377    7123 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ErrImagePull: \"Error response from daemon: pull access denied for nonexistingimage, repository does not exist or may require 'docker login': denied: requested access to the resource is denied\"" pod="default/invalid-svc" podUID="5fcf74de-351a-49e1-a16a-06391d44b8d8"
	Sep 11 10:57:28 functional-740000 kubelet[7123]: E0911 10:57:28.887431    7123 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"nonexistingimage:latest\\\"\"" pod="default/invalid-svc" podUID="5fcf74de-351a-49e1-a16a-06391d44b8d8"
	Sep 11 10:57:30 functional-740000 kubelet[7123]: I0911 10:57:30.647942    7123 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fxc8\" (UniqueName: \"kubernetes.io/projected/5fcf74de-351a-49e1-a16a-06391d44b8d8-kube-api-access-4fxc8\") pod \"5fcf74de-351a-49e1-a16a-06391d44b8d8\" (UID: \"5fcf74de-351a-49e1-a16a-06391d44b8d8\") "
	Sep 11 10:57:30 functional-740000 kubelet[7123]: I0911 10:57:30.649936    7123 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fcf74de-351a-49e1-a16a-06391d44b8d8-kube-api-access-4fxc8" (OuterVolumeSpecName: "kube-api-access-4fxc8") pod "5fcf74de-351a-49e1-a16a-06391d44b8d8" (UID: "5fcf74de-351a-49e1-a16a-06391d44b8d8"). InnerVolumeSpecName "kube-api-access-4fxc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 11 10:57:30 functional-740000 kubelet[7123]: I0911 10:57:30.750178    7123 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-4fxc8\" (UniqueName: \"kubernetes.io/projected/5fcf74de-351a-49e1-a16a-06391d44b8d8-kube-api-access-4fxc8\") on node \"functional-740000\" DevicePath \"\""
	Sep 11 10:57:31 functional-740000 kubelet[7123]: I0911 10:57:31.566521    7123 topology_manager.go:215] "Topology Admit Handler" podUID="fd4f18bd-ef4f-4b9f-894b-bfa926cb8358" podNamespace="default" podName="nginx-svc"
	Sep 11 10:57:31 functional-740000 kubelet[7123]: I0911 10:57:31.656095    7123 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcbwj\" (UniqueName: \"kubernetes.io/projected/fd4f18bd-ef4f-4b9f-894b-bfa926cb8358-kube-api-access-dcbwj\") pod \"nginx-svc\" (UID: \"fd4f18bd-ef4f-4b9f-894b-bfa926cb8358\") " pod="default/nginx-svc"
	Sep 11 10:57:31 functional-740000 kubelet[7123]: I0911 10:57:31.701079    7123 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5fcf74de-351a-49e1-a16a-06391d44b8d8" path="/var/lib/kubelet/pods/5fcf74de-351a-49e1-a16a-06391d44b8d8/volumes"
	
	* 
	* ==> storage-provisioner [6ced62e735a2] <==
	* I0911 10:57:08.284461       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 10:57:08.290344       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 10:57:08.290874       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 10:57:25.682518       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 10:57:25.682577       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-740000_8adb2fb2-fdb3-4d85-8621-b557ecb640c1!
	I0911 10:57:25.682906       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4cedab41-1b9f-428c-9666-e3b5ac5e696e", APIVersion:"v1", ResourceVersion:"569", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-740000_8adb2fb2-fdb3-4d85-8621-b557ecb640c1 became leader
	I0911 10:57:25.783642       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-740000_8adb2fb2-fdb3-4d85-8621-b557ecb640c1!
	
	* 
	* ==> storage-provisioner [9fd9bdc0350e] <==
	* I0911 10:56:40.222078       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 10:56:40.227055       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 10:56:40.227075       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

-- /stdout --
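Note: the ErrImagePull / ImagePullBackOff entries in the kubelet log above for nonexistingimage:latest are expected rather than a symptom: they belong to the short-lived invalid-svc pod the functional suite admits at 10:57:27 and tears down by 10:57:31. A minimal reproduction against this cluster, reconstructed only from the container spec the kubelet dumped (container nginx, image nonexistingimage:latest, port 80) and not necessarily the exact manifest the test applies:

    kubectl --context functional-740000 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: invalid-svc
    spec:
      containers:
      - name: nginx
        # deliberately nonexistent image; the kubelet reports ErrImagePull,
        # then backs off with ImagePullBackOff, exactly as logged above
        image: nonexistingimage:latest
        ports:
        - containerPort: 80
    EOF

Running kubectl describe pod invalid-svc afterwards shows the same "pull access denied ... repository does not exist" event within a few seconds.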
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-740000 -n functional-740000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-740000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx-svc
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/SSHCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-740000 describe pod nginx-svc
helpers_test.go:282: (dbg) kubectl --context functional-740000 describe pod nginx-svc:

-- stdout --
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-740000/192.168.105.4
	Start Time:       Mon, 11 Sep 2023 03:57:31 -0700
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dcbwj (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  kube-api-access-dcbwj:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  0s    default-scheduler  Successfully assigned default/nginx-svc to functional-740000
	  Normal  Pulling    0s    kubelet            Pulling image "docker.io/nginx:alpine"

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/SSHCmd FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/SSHCmd (1.13s)
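Note: the only non-running pod in this post-mortem, nginx-svc, is in ContainerCreating simply because the kubelet started pulling docker.io/nginx:alpine zero seconds before the snapshot was taken; its events (Scheduled, Pulling) do not tie it to the SSH failure itself. A hypothetical follow-up command (not part of the harness) to confirm the pod is merely mid-pull rather than wedged:

    # prints the waiting/running state of the pod's first container; while the
    # image downloads this shows a waiting state, then flips to running
    kubectl --context functional-740000 get pod nginx-svc \
      -o jsonpath='{.status.containerStatuses[0].state}'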

TestImageBuild/serial/BuildWithBuildArg (1.05s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-012000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-012000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 43bcd43d3b73
	Removing intermediate container 43bcd43d3b73
	 ---> 9dab4007f4ee
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 2184ffa90e36
	Removing intermediate container 2184ffa90e36
	 ---> 790ed77264ea
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 6589292ad3e9
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
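Note: the root cause is visible in the build output itself. Every step warns that the base image gcr.io/google-containers/alpine-with-bash:1.0 is linux/amd64 while the Docker host inside the VM is linux/arm64/v8, so the first RUN step dies with "exec format error" (the guest has no binfmt/QEMU user-mode emulation with which to execute amd64 binaries). A sketch of a fix, assuming a multi-arch base such as alpine:3.18 is acceptable for this test; the Dockerfile below is reconstructed from Steps 1-4 of the build log, and the original under ./testdata/image-build/test-arg may contain a further step not shown above:

    # hypothetical arm64-friendly variant of the test-arg Dockerfile
    cat > Dockerfile <<'EOF'
    # multi-arch base instead of the amd64-only alpine-with-bash:1.0
    FROM alpine:3.18
    ARG ENV_A
    ARG ENV_B
    RUN echo "test-build-arg" $ENV_A $ENV_B
    EOF
    out/minikube-darwin-arm64 image build -t aaa:latest \
      --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache . -p image-012000

Forcing --platform linux/amd64 instead would only move the failure: without emulation in the guest, amd64 RUN steps still cannot execute on an arm64 kernel.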
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-012000 -n image-012000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-012000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command   |                   Args                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh        | functional-740000 ssh findmnt            | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | -T /mount1                               |                   |         |         |                     |                     |
	| ssh        | functional-740000 ssh findmnt            | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT |                     |
	|            | -T /mount2                               |                   |         |         |                     |                     |
	| ssh        | functional-740000 ssh findmnt            | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | -T /mount1                               |                   |         |         |                     |                     |
	| ssh        | functional-740000 ssh findmnt            | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT |                     |
	|            | -T /mount2                               |                   |         |         |                     |                     |
	| license    |                                          | minikube          | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	| ssh        | functional-740000 ssh findmnt            | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | -T /mount1                               |                   |         |         |                     |                     |
	| ssh        | functional-740000 ssh findmnt            | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT |                     |
	|            | -T /mount2                               |                   |         |         |                     |                     |
	| ssh        | functional-740000 ssh sudo               | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT |                     |
	|            | systemctl is-active crio                 |                   |         |         |                     |                     |
	| ssh        | functional-740000 ssh sudo cat           | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | /etc/ssl/certs/1565.pem                  |                   |         |         |                     |                     |
	| ssh        | functional-740000 ssh sudo cat           | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | /usr/share/ca-certificates/1565.pem      |                   |         |         |                     |                     |
	| ssh        | functional-740000 ssh sudo cat           | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | /etc/ssl/certs/51391683.0                |                   |         |         |                     |                     |
	| ssh        | functional-740000 ssh sudo cat           | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | /etc/ssl/certs/15652.pem                 |                   |         |         |                     |                     |
	| ssh        | functional-740000 ssh sudo cat           | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | /usr/share/ca-certificates/15652.pem     |                   |         |         |                     |                     |
	| ssh        | functional-740000 ssh sudo cat           | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | /etc/ssl/certs/3ec20f2e.0                |                   |         |         |                     |                     |
	| docker-env | functional-740000 docker-env             | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	| docker-env | functional-740000 docker-env             | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	| ssh        | functional-740000 ssh pgrep              | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT |                     |
	|            | buildkitd                                |                   |         |         |                     |                     |
	| image      | functional-740000                        | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | image ls --format json                   |                   |         |         |                     |                     |
	|            | --alsologtostderr                        |                   |         |         |                     |                     |
	| image      | functional-740000 image build -t         | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | localhost/my-image:functional-740000     |                   |         |         |                     |                     |
	|            | testdata/build --alsologtostderr         |                   |         |         |                     |                     |
	| image      | functional-740000                        | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | image ls --format table                  |                   |         |         |                     |                     |
	|            | --alsologtostderr                        |                   |         |         |                     |                     |
	| image      | functional-740000 image ls               | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	| delete     | -p functional-740000                     | functional-740000 | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	| start      | -p image-012000 --driver=qemu2           | image-012000      | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:59 PDT |
	|            |                                          |                   |         |         |                     |                     |
	| image      | build -t aaa:latest                      | image-012000      | jenkins | v1.31.2 | 11 Sep 23 03:59 PDT | 11 Sep 23 03:59 PDT |
	|            | ./testdata/image-build/test-normal       |                   |         |         |                     |                     |
	|            | -p image-012000                          |                   |         |         |                     |                     |
	| image      | build -t aaa:latest                      | image-012000      | jenkins | v1.31.2 | 11 Sep 23 03:59 PDT | 11 Sep 23 03:59 PDT |
	|            | --build-opt=build-arg=ENV_A=test_env_str |                   |         |         |                     |                     |
	|            | --build-opt=no-cache                     |                   |         |         |                     |                     |
	|            | ./testdata/image-build/test-arg -p       |                   |         |         |                     |                     |
	|            | image-012000                             |                   |         |         |                     |                     |
	|------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 03:58:49
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 03:58:49.918566    2339 out.go:296] Setting OutFile to fd 1 ...
	I0911 03:58:49.918682    2339 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:58:49.918683    2339 out.go:309] Setting ErrFile to fd 2...
	I0911 03:58:49.918685    2339 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:58:49.918798    2339 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 03:58:49.919900    2339 out.go:303] Setting JSON to false
	I0911 03:58:49.936158    2339 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1703,"bootTime":1694428226,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 03:58:49.936211    2339 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 03:58:49.940552    2339 out.go:177] * [image-012000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 03:58:49.947610    2339 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 03:58:49.947673    2339 notify.go:220] Checking for updates...
	I0911 03:58:49.951525    2339 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 03:58:49.954520    2339 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 03:58:49.957568    2339 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 03:58:49.960558    2339 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 03:58:49.963530    2339 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 03:58:49.966664    2339 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 03:58:49.970496    2339 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 03:58:49.977552    2339 start.go:298] selected driver: qemu2
	I0911 03:58:49.977554    2339 start.go:902] validating driver "qemu2" against <nil>
	I0911 03:58:49.977560    2339 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 03:58:49.977612    2339 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 03:58:49.980498    2339 out.go:177] * Automatically selected the socket_vmnet network
	I0911 03:58:49.985774    2339 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0911 03:58:49.985855    2339 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0911 03:58:49.985871    2339 cni.go:84] Creating CNI manager for ""
	I0911 03:58:49.985876    2339 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:58:49.985880    2339 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 03:58:49.985887    2339 start_flags.go:321] config:
	{Name:image-012000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:image-012000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:58:49.990221    2339 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 03:58:49.996549    2339 out.go:177] * Starting control plane node image-012000 in cluster image-012000
	I0911 03:58:49.999543    2339 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 03:58:49.999571    2339 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 03:58:49.999586    2339 cache.go:57] Caching tarball of preloaded images
	I0911 03:58:49.999652    2339 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 03:58:49.999656    2339 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 03:58:49.999831    2339 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/config.json ...
	I0911 03:58:49.999841    2339 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/config.json: {Name:mk642552ba527f2a622d33eb497c9a5afcd550e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:58:50.000066    2339 start.go:365] acquiring machines lock for image-012000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 03:58:50.000100    2339 start.go:369] acquired machines lock for "image-012000" in 29.583µs
	I0911 03:58:50.000111    2339 start.go:93] Provisioning new machine with config: &{Name:image-012000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:image-012000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 03:58:50.000144    2339 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 03:58:50.007550    2339 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0911 03:58:50.027996    2339 start.go:159] libmachine.API.Create for "image-012000" (driver="qemu2")
	I0911 03:58:50.028019    2339 client.go:168] LocalClient.Create starting
	I0911 03:58:50.028093    2339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 03:58:50.028118    2339 main.go:141] libmachine: Decoding PEM data...
	I0911 03:58:50.028128    2339 main.go:141] libmachine: Parsing certificate...
	I0911 03:58:50.028171    2339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 03:58:50.028187    2339 main.go:141] libmachine: Decoding PEM data...
	I0911 03:58:50.028194    2339 main.go:141] libmachine: Parsing certificate...
	I0911 03:58:50.028509    2339 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 03:58:50.343191    2339 main.go:141] libmachine: Creating SSH key...
	I0911 03:58:50.434721    2339 main.go:141] libmachine: Creating Disk image...
	I0911 03:58:50.434724    2339 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 03:58:50.434854    2339 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/image-012000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/image-012000/disk.qcow2
	I0911 03:58:50.455127    2339 main.go:141] libmachine: STDOUT: 
	I0911 03:58:50.455138    2339 main.go:141] libmachine: STDERR: 
	I0911 03:58:50.455188    2339 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/image-012000/disk.qcow2 +20000M
	I0911 03:58:50.462336    2339 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 03:58:50.462356    2339 main.go:141] libmachine: STDERR: 
	I0911 03:58:50.462373    2339 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/image-012000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/image-012000/disk.qcow2
	I0911 03:58:50.462378    2339 main.go:141] libmachine: Starting QEMU VM...
	I0911 03:58:50.462417    2339 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/image-012000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/image-012000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/image-012000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:49:22:5b:da:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/image-012000/disk.qcow2
	I0911 03:58:50.504712    2339 main.go:141] libmachine: STDOUT: 
	I0911 03:58:50.504730    2339 main.go:141] libmachine: STDERR: 
	I0911 03:58:50.504733    2339 main.go:141] libmachine: Attempt 0
	I0911 03:58:50.504746    2339 main.go:141] libmachine: Searching for 8e:49:22:5b:da:de in /var/db/dhcpd_leases ...
	I0911 03:58:50.504817    2339 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0911 03:58:50.504835    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:62:99:da:56:de:73 ID:1,62:99:da:56:de:73 Lease:0x650043a6}
	I0911 03:58:50.504840    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:d8:6:ae:f2:7b ID:1,be:d8:6:ae:f2:7b Lease:0x64fef219}
	I0911 03:58:50.504844    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:8d:15:a0:6f:df ID:1,1a:8d:15:a0:6f:df Lease:0x65004356}
	I0911 03:58:52.506932    2339 main.go:141] libmachine: Attempt 1
	I0911 03:58:52.507059    2339 main.go:141] libmachine: Searching for 8e:49:22:5b:da:de in /var/db/dhcpd_leases ...
	I0911 03:58:52.507399    2339 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0911 03:58:52.507441    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:62:99:da:56:de:73 ID:1,62:99:da:56:de:73 Lease:0x650043a6}
	I0911 03:58:52.507468    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:d8:6:ae:f2:7b ID:1,be:d8:6:ae:f2:7b Lease:0x64fef219}
	I0911 03:58:52.507498    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:8d:15:a0:6f:df ID:1,1a:8d:15:a0:6f:df Lease:0x65004356}
	I0911 03:58:54.509606    2339 main.go:141] libmachine: Attempt 2
	I0911 03:58:54.509619    2339 main.go:141] libmachine: Searching for 8e:49:22:5b:da:de in /var/db/dhcpd_leases ...
	I0911 03:58:54.509752    2339 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0911 03:58:54.509771    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:62:99:da:56:de:73 ID:1,62:99:da:56:de:73 Lease:0x650043a6}
	I0911 03:58:54.509775    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:d8:6:ae:f2:7b ID:1,be:d8:6:ae:f2:7b Lease:0x64fef219}
	I0911 03:58:54.509779    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:8d:15:a0:6f:df ID:1,1a:8d:15:a0:6f:df Lease:0x65004356}
	I0911 03:58:56.511805    2339 main.go:141] libmachine: Attempt 3
	I0911 03:58:56.511829    2339 main.go:141] libmachine: Searching for 8e:49:22:5b:da:de in /var/db/dhcpd_leases ...
	I0911 03:58:56.511897    2339 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0911 03:58:56.511906    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:62:99:da:56:de:73 ID:1,62:99:da:56:de:73 Lease:0x650043a6}
	I0911 03:58:56.511910    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:d8:6:ae:f2:7b ID:1,be:d8:6:ae:f2:7b Lease:0x64fef219}
	I0911 03:58:56.511914    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:8d:15:a0:6f:df ID:1,1a:8d:15:a0:6f:df Lease:0x65004356}
	I0911 03:58:58.513891    2339 main.go:141] libmachine: Attempt 4
	I0911 03:58:58.513896    2339 main.go:141] libmachine: Searching for 8e:49:22:5b:da:de in /var/db/dhcpd_leases ...
	I0911 03:58:58.513932    2339 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0911 03:58:58.513937    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:62:99:da:56:de:73 ID:1,62:99:da:56:de:73 Lease:0x650043a6}
	I0911 03:58:58.513942    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:d8:6:ae:f2:7b ID:1,be:d8:6:ae:f2:7b Lease:0x64fef219}
	I0911 03:58:58.513946    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:8d:15:a0:6f:df ID:1,1a:8d:15:a0:6f:df Lease:0x65004356}
	I0911 03:59:00.515934    2339 main.go:141] libmachine: Attempt 5
	I0911 03:59:00.515943    2339 main.go:141] libmachine: Searching for 8e:49:22:5b:da:de in /var/db/dhcpd_leases ...
	I0911 03:59:00.516024    2339 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0911 03:59:00.516033    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:62:99:da:56:de:73 ID:1,62:99:da:56:de:73 Lease:0x650043a6}
	I0911 03:59:00.516037    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:d8:6:ae:f2:7b ID:1,be:d8:6:ae:f2:7b Lease:0x64fef219}
	I0911 03:59:00.516049    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:8d:15:a0:6f:df ID:1,1a:8d:15:a0:6f:df Lease:0x65004356}
	I0911 03:59:02.518086    2339 main.go:141] libmachine: Attempt 6
	I0911 03:59:02.518107    2339 main.go:141] libmachine: Searching for 8e:49:22:5b:da:de in /var/db/dhcpd_leases ...
	I0911 03:59:02.518322    2339 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0911 03:59:02.518343    2339 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:49:22:5b:da:de ID:1,8e:49:22:5b:da:de Lease:0x65004475}
	I0911 03:59:02.518349    2339 main.go:141] libmachine: Found match: 8e:49:22:5b:da:de
	I0911 03:59:02.518367    2339 main.go:141] libmachine: IP: 192.168.105.5
	I0911 03:59:02.518376    2339 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0911 03:59:04.538185    2339 machine.go:88] provisioning docker machine ...
	I0911 03:59:04.538257    2339 buildroot.go:166] provisioning hostname "image-012000"
	I0911 03:59:04.538487    2339 main.go:141] libmachine: Using SSH client type: native
	I0911 03:59:04.539469    2339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100aaa3b0] 0x100aace10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0911 03:59:04.539485    2339 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-012000 && echo "image-012000" | sudo tee /etc/hostname
	I0911 03:59:04.624068    2339 main.go:141] libmachine: SSH cmd err, output: <nil>: image-012000
	
	I0911 03:59:04.624204    2339 main.go:141] libmachine: Using SSH client type: native
	I0911 03:59:04.624712    2339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100aaa3b0] 0x100aace10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0911 03:59:04.624725    2339 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-012000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-012000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-012000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 03:59:04.691839    2339 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 03:59:04.691871    2339 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17223-1124/.minikube CaCertPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17223-1124/.minikube}
	I0911 03:59:04.691881    2339 buildroot.go:174] setting up certificates
	I0911 03:59:04.691893    2339 provision.go:83] configureAuth start
	I0911 03:59:04.691897    2339 provision.go:138] copyHostCerts
	I0911 03:59:04.692014    2339 exec_runner.go:144] found /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.pem, removing ...
	I0911 03:59:04.692020    2339 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.pem
	I0911 03:59:04.692205    2339 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.pem (1078 bytes)
	I0911 03:59:04.692513    2339 exec_runner.go:144] found /Users/jenkins/minikube-integration/17223-1124/.minikube/cert.pem, removing ...
	I0911 03:59:04.692516    2339 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17223-1124/.minikube/cert.pem
	I0911 03:59:04.692591    2339 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17223-1124/.minikube/cert.pem (1123 bytes)
	I0911 03:59:04.692732    2339 exec_runner.go:144] found /Users/jenkins/minikube-integration/17223-1124/.minikube/key.pem, removing ...
	I0911 03:59:04.692734    2339 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17223-1124/.minikube/key.pem
	I0911 03:59:04.692793    2339 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17223-1124/.minikube/key.pem (1679 bytes)
	I0911 03:59:04.692897    2339 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca-key.pem org=jenkins.image-012000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-012000]
	I0911 03:59:04.781713    2339 provision.go:172] copyRemoteCerts
	I0911 03:59:04.781743    2339 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 03:59:04.781748    2339 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/image-012000/id_rsa Username:docker}
	I0911 03:59:04.810363    2339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0911 03:59:04.817471    2339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0911 03:59:04.824435    2339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 03:59:04.831198    2339 provision.go:86] duration metric: configureAuth took 139.3005ms
	I0911 03:59:04.831203    2339 buildroot.go:189] setting minikube options for container-runtime
	I0911 03:59:04.831306    2339 config.go:182] Loaded profile config "image-012000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:59:04.831342    2339 main.go:141] libmachine: Using SSH client type: native
	I0911 03:59:04.831556    2339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100aaa3b0] 0x100aace10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0911 03:59:04.831559    2339 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0911 03:59:04.886233    2339 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0911 03:59:04.886239    2339 buildroot.go:70] root file system type: tmpfs
	I0911 03:59:04.886307    2339 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0911 03:59:04.886352    2339 main.go:141] libmachine: Using SSH client type: native
	I0911 03:59:04.886598    2339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100aaa3b0] 0x100aace10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0911 03:59:04.886632    2339 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0911 03:59:04.945778    2339 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0911 03:59:04.945817    2339 main.go:141] libmachine: Using SSH client type: native
	I0911 03:59:04.946046    2339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100aaa3b0] 0x100aace10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0911 03:59:04.946053    2339 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0911 03:59:05.267102    2339 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0911 03:59:05.267110    2339 machine.go:91] provisioned docker machine in 728.928292ms
	I0911 03:59:05.267114    2339 client.go:171] LocalClient.Create took 15.239479917s
	I0911 03:59:05.267127    2339 start.go:167] duration metric: libmachine.API.Create for "image-012000" took 15.239520416s
	I0911 03:59:05.267129    2339 start.go:300] post-start starting for "image-012000" (driver="qemu2")
	I0911 03:59:05.267133    2339 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 03:59:05.267205    2339 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 03:59:05.267212    2339 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/image-012000/id_rsa Username:docker}
	I0911 03:59:05.297895    2339 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 03:59:05.299363    2339 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 03:59:05.299370    2339 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17223-1124/.minikube/addons for local assets ...
	I0911 03:59:05.299440    2339 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17223-1124/.minikube/files for local assets ...
	I0911 03:59:05.299539    2339 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem -> 15652.pem in /etc/ssl/certs
	I0911 03:59:05.299649    2339 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 03:59:05.302174    2339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem --> /etc/ssl/certs/15652.pem (1708 bytes)
	I0911 03:59:05.308943    2339 start.go:303] post-start completed in 41.811167ms
	I0911 03:59:05.309300    2339 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/config.json ...
	I0911 03:59:05.309447    2339 start.go:128] duration metric: createHost completed in 15.309688708s
	I0911 03:59:05.309475    2339 main.go:141] libmachine: Using SSH client type: native
	I0911 03:59:05.309688    2339 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100aaa3b0] 0x100aace10 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0911 03:59:05.309690    2339 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 03:59:05.363287    2339 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694429945.427449877
	
	I0911 03:59:05.363291    2339 fix.go:206] guest clock: 1694429945.427449877
	I0911 03:59:05.363294    2339 fix.go:219] Guest: 2023-09-11 03:59:05.427449877 -0700 PDT Remote: 2023-09-11 03:59:05.30945 -0700 PDT m=+15.412173251 (delta=117.999877ms)
	I0911 03:59:05.363303    2339 fix.go:190] guest clock delta is within tolerance: 117.999877ms
	I0911 03:59:05.363305    2339 start.go:83] releasing machines lock for "image-012000", held for 15.363591834s
	I0911 03:59:05.363564    2339 ssh_runner.go:195] Run: cat /version.json
	I0911 03:59:05.363569    2339 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/image-012000/id_rsa Username:docker}
	I0911 03:59:05.363598    2339 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 03:59:05.363615    2339 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/image-012000/id_rsa Username:docker}
	I0911 03:59:05.430875    2339 ssh_runner.go:195] Run: systemctl --version
	I0911 03:59:05.432951    2339 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 03:59:05.434765    2339 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 03:59:05.434797    2339 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 03:59:05.439631    2339 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 03:59:05.439636    2339 start.go:466] detecting cgroup driver to use...
	I0911 03:59:05.439696    2339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 03:59:05.444848    2339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0911 03:59:05.448405    2339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0911 03:59:05.451445    2339 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0911 03:59:05.451479    2339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0911 03:59:05.454374    2339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 03:59:05.457548    2339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0911 03:59:05.460861    2339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 03:59:05.464532    2339 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 03:59:05.467731    2339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0911 03:59:05.470547    2339 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 03:59:05.473565    2339 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 03:59:05.476737    2339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:59:05.540740    2339 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0911 03:59:05.547110    2339 start.go:466] detecting cgroup driver to use...
	I0911 03:59:05.547165    2339 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0911 03:59:05.554244    2339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 03:59:05.558932    2339 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 03:59:05.565587    2339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 03:59:05.570368    2339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0911 03:59:05.575426    2339 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0911 03:59:05.623930    2339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0911 03:59:05.629017    2339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 03:59:05.634700    2339 ssh_runner.go:195] Run: which cri-dockerd
	I0911 03:59:05.636026    2339 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0911 03:59:05.638547    2339 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0911 03:59:05.643591    2339 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0911 03:59:05.712884    2339 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0911 03:59:05.775567    2339 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0911 03:59:05.775576    2339 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0911 03:59:05.781073    2339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:59:05.843849    2339 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0911 03:59:07.005851    2339 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.162018375s)
	I0911 03:59:07.005913    2339 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0911 03:59:07.073036    2339 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0911 03:59:07.135347    2339 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0911 03:59:07.216567    2339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:59:07.280288    2339 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0911 03:59:07.286714    2339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:59:07.348417    2339 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0911 03:59:07.371677    2339 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0911 03:59:07.371771    2339 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0911 03:59:07.374549    2339 start.go:534] Will wait 60s for crictl version
	I0911 03:59:07.374595    2339 ssh_runner.go:195] Run: which crictl
	I0911 03:59:07.375989    2339 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 03:59:07.391804    2339 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.5
	RuntimeApiVersion:  v1alpha2
	I0911 03:59:07.391869    2339 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0911 03:59:07.401704    2339 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0911 03:59:07.418660    2339 out.go:204] * Preparing Kubernetes v1.28.1 on Docker 24.0.5 ...
	I0911 03:59:07.418790    2339 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0911 03:59:07.420189    2339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
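
The bash one-liner above is a crash-safe /etc/hosts edit: grep -v drops any stale host.minikube.internal entry, the fresh mapping is appended, and the result is staged in /tmp/h.$$ before being copied over the live file, so an interrupted run never leaves a truncated hosts file. The net effect is one line (the same pattern repeats for control-plane.minikube.internal at 03:59:11.371725):

    192.168.105.1	host.minikube.internal
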
	I0911 03:59:07.423732    2339 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 03:59:07.423776    2339 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0911 03:59:07.428884    2339 docker.go:636] Got preloaded images: 
	I0911 03:59:07.428888    2339 docker.go:642] registry.k8s.io/kube-apiserver:v1.28.1 wasn't preloaded
	I0911 03:59:07.428943    2339 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0911 03:59:07.432180    2339 ssh_runner.go:195] Run: which lz4
	I0911 03:59:07.433489    2339 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0911 03:59:07.434762    2339 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 03:59:07.434772    2339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (356902558 bytes)
	I0911 03:59:08.749357    2339 docker.go:600] Took 1.315938 seconds to copy over tarball
	I0911 03:59:08.749408    2339 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 03:59:09.765282    2339 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.015883959s)
	I0911 03:59:09.765292    2339 ssh_runner.go:146] rm: /preloaded.tar.lz4
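
Preloading replaces dozens of registry pulls with a single transfer: a ~357 MB lz4-compressed tarball of /var/lib/docker content is scp'd into the guest, unpacked under /var, and deleted. A tarball like this can be spot-checked on the host without extracting it (hypothetical local invocation; -I hands decompression to lz4, exactly as in the extract above):

    tar -I lz4 -tf preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 | head
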
	I0911 03:59:09.781594    2339 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0911 03:59:09.785451    2339 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0911 03:59:09.790730    2339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:59:09.851619    2339 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0911 03:59:11.335047    2339 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.483452666s)
	I0911 03:59:11.335145    2339 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0911 03:59:11.340797    2339 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.1
	registry.k8s.io/kube-scheduler:v1.28.1
	registry.k8s.io/kube-proxy:v1.28.1
	registry.k8s.io/kube-controller-manager:v1.28.1
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0911 03:59:11.340803    2339 cache_images.go:84] Images are preloaded, skipping loading
	I0911 03:59:11.340864    2339 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0911 03:59:11.348711    2339 cni.go:84] Creating CNI manager for ""
	I0911 03:59:11.348719    2339 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:59:11.348734    2339 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 03:59:11.348744    2339 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-012000 NodeName:image-012000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 03:59:11.348836    2339 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-012000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
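
The rendered file above stitches four API documents into one kubeadm config, separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. On kubeadm v1.25 and later, a file of this shape can be sanity-checked without touching the node; a hedged sketch against the path this run later stages at 03:59:11.945535:

    # dry-check the multi-document config before init
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
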
	
	I0911 03:59:11.348880    2339 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-012000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:image-012000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
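
In the kubelet drop-in above, the bare ExecStart= line is deliberate: for a non-oneshot systemd service a second ExecStart is rejected unless the list is cleared first, so the empty assignment resets whatever the base unit defined before the override takes its place. The pattern in isolation (kubelet flags elided here):

    [Service]
    # an empty assignment clears the ExecStart inherited from the base unit...
    ExecStart=
    # ...then the override becomes the only start command
    ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --config=/var/lib/kubelet/config.yaml
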
	I0911 03:59:11.348933    2339 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 03:59:11.352461    2339 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 03:59:11.352487    2339 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 03:59:11.355192    2339 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0911 03:59:11.360067    2339 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 03:59:11.365048    2339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0911 03:59:11.370441    2339 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0911 03:59:11.371725    2339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 03:59:11.374971    2339 certs.go:56] Setting up /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000 for IP: 192.168.105.5
	I0911 03:59:11.374988    2339 certs.go:190] acquiring lock for shared ca certs: {Name:mk38c09806021c18792511eb48bf232ccb80ec29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:59:11.375160    2339 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.key
	I0911 03:59:11.375197    2339 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.key
	I0911 03:59:11.375219    2339 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/client.key
	I0911 03:59:11.375223    2339 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/client.crt with IP's: []
	I0911 03:59:11.410864    2339 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/client.crt ...
	I0911 03:59:11.410867    2339 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/client.crt: {Name:mk091b41ace723892d0ef42f1906d854683eeb03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:59:11.411089    2339 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/client.key ...
	I0911 03:59:11.411090    2339 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/client.key: {Name:mk00e7d2583ad207b04b807fdf53a2a92ea62823 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:59:11.411205    2339 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/apiserver.key.e69b33ca
	I0911 03:59:11.411210    2339 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 03:59:11.697728    2339 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/apiserver.crt.e69b33ca ...
	I0911 03:59:11.697731    2339 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/apiserver.crt.e69b33ca: {Name:mk0382d94e017c087ba4de077ba6768a36ccf364 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:59:11.697937    2339 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/apiserver.key.e69b33ca ...
	I0911 03:59:11.697939    2339 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/apiserver.key.e69b33ca: {Name:mk52d89c767c1105eeb56237d4d193dbcbb40fa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:59:11.698047    2339 certs.go:337] copying /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/apiserver.crt
	I0911 03:59:11.698364    2339 certs.go:341] copying /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/apiserver.key
	I0911 03:59:11.698672    2339 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/proxy-client.key
	I0911 03:59:11.698733    2339 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/proxy-client.crt with IP's: []
	I0911 03:59:11.823324    2339 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/proxy-client.crt ...
	I0911 03:59:11.823329    2339 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/proxy-client.crt: {Name:mk96e61e18342b2268c9d2c0facc96be6a12e656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:59:11.823587    2339 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/proxy-client.key ...
	I0911 03:59:11.823589    2339 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/proxy-client.key: {Name:mk0b1abfa567bbdb6afd4f85a1e2b32251c82220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
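
The apiserver certificate generated above is signed for the IPs chosen at 03:59:11.411210: the node IP, the first service-CIDR address (10.96.0.1), loopback, and 10.0.0.1. The SANs baked into any of these PEMs can be confirmed with a standard openssl inspection of the host-side copy from this run:

    openssl x509 -noout -text \
      -in /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/apiserver.crt \
      | grep -A1 'Subject Alternative Name'
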
	I0911 03:59:11.823845    2339 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/1565.pem (1338 bytes)
	W0911 03:59:11.823876    2339 certs.go:433] ignoring /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/1565_empty.pem, impossibly tiny 0 bytes
	I0911 03:59:11.823881    2339 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 03:59:11.823900    2339 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem (1078 bytes)
	I0911 03:59:11.823917    2339 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem (1123 bytes)
	I0911 03:59:11.823934    2339 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/key.pem (1679 bytes)
	I0911 03:59:11.823975    2339 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem (1708 bytes)
	I0911 03:59:11.824328    2339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 03:59:11.832347    2339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 03:59:11.839058    2339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 03:59:11.845671    2339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/image-012000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 03:59:11.852668    2339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 03:59:11.859394    2339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 03:59:11.865736    2339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 03:59:11.872783    2339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0911 03:59:11.879807    2339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem --> /usr/share/ca-certificates/15652.pem (1708 bytes)
	I0911 03:59:11.886400    2339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 03:59:11.893324    2339 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/1565.pem --> /usr/share/ca-certificates/1565.pem (1338 bytes)
	I0911 03:59:11.900340    2339 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 03:59:11.905790    2339 ssh_runner.go:195] Run: openssl version
	I0911 03:59:11.907982    2339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1565.pem && ln -fs /usr/share/ca-certificates/1565.pem /etc/ssl/certs/1565.pem"
	I0911 03:59:11.910786    2339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1565.pem
	I0911 03:59:11.912132    2339 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 10:55 /usr/share/ca-certificates/1565.pem
	I0911 03:59:11.912152    2339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1565.pem
	I0911 03:59:11.913948    2339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1565.pem /etc/ssl/certs/51391683.0"
	I0911 03:59:11.917090    2339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15652.pem && ln -fs /usr/share/ca-certificates/15652.pem /etc/ssl/certs/15652.pem"
	I0911 03:59:11.920151    2339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15652.pem
	I0911 03:59:11.921563    2339 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 10:55 /usr/share/ca-certificates/15652.pem
	I0911 03:59:11.921579    2339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15652.pem
	I0911 03:59:11.923418    2339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15652.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 03:59:11.926112    2339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 03:59:11.929396    2339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:59:11.930941    2339 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:54 /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:59:11.930960    2339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:59:11.932682    2339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
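
The symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) are not arbitrary: OpenSSL looks up CAs in /etc/ssl/certs by subject-name hash, so each PEM needs a link named <hash>.0. The hash is exactly what the openssl x509 -hash -noout calls in the log print, e.g.:

    # prints b5213941 for the minikube CA -- the basename of its required symlink
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
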
	I0911 03:59:11.935489    2339 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 03:59:11.936805    2339 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 03:59:11.936833    2339 kubeadm.go:404] StartCluster: {Name:image-012000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:image-012000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:59:11.936918    2339 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0911 03:59:11.942268    2339 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 03:59:11.945535    2339 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 03:59:11.948705    2339 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 03:59:11.951805    2339 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 03:59:11.951816    2339 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 03:59:11.973784    2339 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 03:59:11.973808    2339 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 03:59:12.024579    2339 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 03:59:12.024624    2339 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 03:59:12.024663    2339 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 03:59:12.085304    2339 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 03:59:12.091525    2339 out.go:204]   - Generating certificates and keys ...
	I0911 03:59:12.091591    2339 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 03:59:12.091620    2339 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 03:59:12.262120    2339 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 03:59:12.417767    2339 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 03:59:12.479627    2339 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 03:59:12.734336    2339 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 03:59:12.921825    2339 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 03:59:12.921882    2339 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-012000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0911 03:59:13.145900    2339 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 03:59:13.145966    2339 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-012000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0911 03:59:13.244287    2339 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 03:59:13.364389    2339 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 03:59:13.415499    2339 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 03:59:13.415526    2339 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 03:59:13.614608    2339 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 03:59:13.664102    2339 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 03:59:13.721383    2339 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 03:59:13.891355    2339 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 03:59:13.891739    2339 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 03:59:13.893478    2339 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 03:59:13.901800    2339 out.go:204]   - Booting up control plane ...
	I0911 03:59:13.901868    2339 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 03:59:13.901905    2339 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 03:59:13.901940    2339 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 03:59:13.901986    2339 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 03:59:13.902037    2339 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 03:59:13.902057    2339 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 03:59:13.971966    2339 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 03:59:17.975749    2339 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.004023 seconds
	I0911 03:59:17.975813    2339 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 03:59:17.981520    2339 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 03:59:18.491198    2339 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 03:59:18.491301    2339 kubeadm.go:322] [mark-control-plane] Marking the node image-012000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 03:59:18.996318    2339 kubeadm.go:322] [bootstrap-token] Using token: x9r0j0.0wi3rl93lerukxzk
	I0911 03:59:19.007864    2339 out.go:204]   - Configuring RBAC rules ...
	I0911 03:59:19.007914    2339 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 03:59:19.008960    2339 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 03:59:19.011785    2339 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 03:59:19.012741    2339 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 03:59:19.014530    2339 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 03:59:19.015626    2339 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 03:59:19.019718    2339 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 03:59:19.188744    2339 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 03:59:19.411193    2339 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 03:59:19.411810    2339 kubeadm.go:322] 
	I0911 03:59:19.411841    2339 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 03:59:19.411844    2339 kubeadm.go:322] 
	I0911 03:59:19.411879    2339 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 03:59:19.411880    2339 kubeadm.go:322] 
	I0911 03:59:19.411891    2339 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 03:59:19.411916    2339 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 03:59:19.411939    2339 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 03:59:19.411941    2339 kubeadm.go:322] 
	I0911 03:59:19.411968    2339 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 03:59:19.411970    2339 kubeadm.go:322] 
	I0911 03:59:19.412043    2339 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 03:59:19.412046    2339 kubeadm.go:322] 
	I0911 03:59:19.412072    2339 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 03:59:19.412112    2339 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 03:59:19.412151    2339 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 03:59:19.412153    2339 kubeadm.go:322] 
	I0911 03:59:19.412197    2339 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 03:59:19.412229    2339 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 03:59:19.412230    2339 kubeadm.go:322] 
	I0911 03:59:19.412274    2339 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token x9r0j0.0wi3rl93lerukxzk \
	I0911 03:59:19.412324    2339 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:77399ad9541b4667fda28bf9bf29366ef8ebe6fdc39d6e893157dd935cb9f38b \
	I0911 03:59:19.412338    2339 kubeadm.go:322] 	--control-plane 
	I0911 03:59:19.412340    2339 kubeadm.go:322] 
	I0911 03:59:19.412380    2339 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 03:59:19.412382    2339 kubeadm.go:322] 
	I0911 03:59:19.412431    2339 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token x9r0j0.0wi3rl93lerukxzk \
	I0911 03:59:19.412485    2339 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:77399ad9541b4667fda28bf9bf29366ef8ebe6fdc39d6e893157dd935cb9f38b 
	I0911 03:59:19.412542    2339 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
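
The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the DER encoding of the cluster CA's public key. If the printed command is lost, the hash can be recomputed from the CA certificate (the standard recipe from the kubeadm docs, pointed at the certs dir this cluster uses):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
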
	I0911 03:59:19.412548    2339 cni.go:84] Creating CNI manager for ""
	I0911 03:59:19.412554    2339 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:59:19.419851    2339 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 03:59:19.423851    2339 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 03:59:19.427006    2339 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
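
The 457 bytes copied to /etc/cni/net.d/1-k8s.conflist above are minikube's built-in bridge CNI config, matching the 10.244.0.0/16 pod CIDR chosen earlier. A minimal sketch of a bridge conflist of that shape (field values assumed, not read from this run):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
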
	I0911 03:59:19.431796    2339 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 03:59:19.431847    2339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:59:19.431854    2339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=image-012000 minikube.k8s.io/updated_at=2023_09_11T03_59_19_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 03:59:19.487979    2339 kubeadm.go:1081] duration metric: took 56.167625ms to wait for elevateKubeSystemPrivileges.
	I0911 03:59:19.487983    2339 ops.go:34] apiserver oom_adj: -16
	I0911 03:59:19.494665    2339 kubeadm.go:406] StartCluster complete in 7.55802275s
	I0911 03:59:19.494678    2339 settings.go:142] acquiring lock: {Name:mk1469232b3abbdcc69ed77e286fb2789adb44fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:59:19.494762    2339 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 03:59:19.495088    2339 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/kubeconfig: {Name:mk8b43c711db1489632c69fe978a061a5dcf6734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:59:19.495252    2339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 03:59:19.495274    2339 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 03:59:19.495305    2339 addons.go:69] Setting storage-provisioner=true in profile "image-012000"
	I0911 03:59:19.495307    2339 addons.go:69] Setting default-storageclass=true in profile "image-012000"
	I0911 03:59:19.495311    2339 addons.go:231] Setting addon storage-provisioner=true in "image-012000"
	I0911 03:59:19.495312    2339 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-012000"
	I0911 03:59:19.495330    2339 host.go:66] Checking if "image-012000" exists ...
	I0911 03:59:19.495362    2339 config.go:182] Loaded profile config "image-012000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:59:19.499953    2339 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 03:59:19.502093    2339 addons.go:231] Setting addon default-storageclass=true in "image-012000"
	I0911 03:59:19.503989    2339 host.go:66] Checking if "image-012000" exists ...
	I0911 03:59:19.504017    2339 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 03:59:19.504021    2339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 03:59:19.504029    2339 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/image-012000/id_rsa Username:docker}
	I0911 03:59:19.504741    2339 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 03:59:19.504743    2339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 03:59:19.504746    2339 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/image-012000/id_rsa Username:docker}
	I0911 03:59:19.505665    2339 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-012000" context rescaled to 1 replicas
	I0911 03:59:19.505679    2339 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 03:59:19.513949    2339 out.go:177] * Verifying Kubernetes components...
	I0911 03:59:19.517999    2339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 03:59:19.543591    2339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 03:59:19.543927    2339 api_server.go:52] waiting for apiserver process to appear ...
	I0911 03:59:19.543963    2339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 03:59:19.545598    2339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 03:59:19.549435    2339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 03:59:19.983718    2339 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
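
The pipeline at 03:59:19.543591 rewrites the CoreDNS Corefile inside its ConfigMap: sed inserts a hosts block ahead of the forward directive (plus a log directive after errors) and kubectl replace applies the result, so in-cluster lookups of host.minikube.internal resolve to the host. The injected fragment, reconstructed from that sed:

    hosts {
       192.168.105.1 host.minikube.internal
       fallthrough
    }
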
	I0911 03:59:19.983732    2339 api_server.go:72] duration metric: took 478.0565ms to wait for apiserver process to appear ...
	I0911 03:59:19.983736    2339 api_server.go:88] waiting for apiserver healthz status ...
	I0911 03:59:19.983744    2339 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0911 03:59:19.987638    2339 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
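
The healthz gate above is a plain HTTPS GET against the apiserver, accepted once it returns 200 with body "ok". The same probe can be run by hand (-k because the minikube CA is not in the client's trust store):

    curl -k https://192.168.105.5:8443/healthz
    # -> ok
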
	I0911 03:59:19.988295    2339 api_server.go:141] control plane version: v1.28.1
	I0911 03:59:19.988300    2339 api_server.go:131] duration metric: took 4.5615ms to wait for apiserver health ...
	I0911 03:59:19.988305    2339 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 03:59:19.990966    2339 system_pods.go:59] 4 kube-system pods found
	I0911 03:59:19.990971    2339 system_pods.go:61] "etcd-image-012000" [5db9f9d4-8e25-4200-ba1a-b6563c27146d] Pending
	I0911 03:59:19.990973    2339 system_pods.go:61] "kube-apiserver-image-012000" [873cc0de-a0fb-4043-90b1-09d640f443c4] Pending
	I0911 03:59:19.990975    2339 system_pods.go:61] "kube-controller-manager-image-012000" [2c856d3c-f858-49b9-ab50-0ff08264e79d] Pending
	I0911 03:59:19.990977    2339 system_pods.go:61] "kube-scheduler-image-012000" [f8a897cd-6e3e-493c-9b57-e24a5a3d2212] Pending
	I0911 03:59:19.990979    2339 system_pods.go:74] duration metric: took 2.672459ms to wait for pod list to return data ...
	I0911 03:59:19.990982    2339 kubeadm.go:581] duration metric: took 485.308083ms to wait for : map[apiserver:true system_pods:true] ...
	I0911 03:59:19.990987    2339 node_conditions.go:102] verifying NodePressure condition ...
	I0911 03:59:19.992241    2339 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0911 03:59:19.992247    2339 node_conditions.go:123] node cpu capacity is 2
	I0911 03:59:19.992251    2339 node_conditions.go:105] duration metric: took 1.26325ms to run NodePressure ...
	I0911 03:59:19.992255    2339 start.go:228] waiting for startup goroutines ...
	I0911 03:59:20.041444    2339 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0911 03:59:20.049441    2339 addons.go:502] enable addons completed in 554.181375ms: enabled=[storage-provisioner default-storageclass]
	I0911 03:59:20.049453    2339 start.go:233] waiting for cluster config update ...
	I0911 03:59:20.049457    2339 start.go:242] writing updated cluster config ...
	I0911 03:59:20.049694    2339 ssh_runner.go:195] Run: rm -f paused
	I0911 03:59:20.077653    2339 start.go:600] kubectl: 1.27.2, cluster: 1.28.1 (minor skew: 1)
	I0911 03:59:20.081456    2339 out.go:177] * Done! kubectl is now configured to use "image-012000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-11 10:59:01 UTC, ends at Mon 2023-09-11 10:59:21 UTC. --
	Sep 11 10:59:15 image-012000 cri-dockerd[994]: time="2023-09-11T10:59:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/17af9fb052c7efd62a17071615aba9c916787d7f66ed0fe158778b089987aa05/resolv.conf as [nameserver 192.168.105.1]"
	Sep 11 10:59:15 image-012000 cri-dockerd[994]: time="2023-09-11T10:59:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/702bf8214a87870c1d46e21c9b91a2ff7d3a329659df71949927e65b3f015931/resolv.conf as [nameserver 192.168.105.1]"
	Sep 11 10:59:15 image-012000 dockerd[1101]: time="2023-09-11T10:59:15.055171506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 11 10:59:15 image-012000 dockerd[1101]: time="2023-09-11T10:59:15.055225256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:59:15 image-012000 dockerd[1101]: time="2023-09-11T10:59:15.055323840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 11 10:59:15 image-012000 dockerd[1101]: time="2023-09-11T10:59:15.055337590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:59:15 image-012000 dockerd[1101]: time="2023-09-11T10:59:15.059384465Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 11 10:59:15 image-012000 dockerd[1101]: time="2023-09-11T10:59:15.059433673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:59:15 image-012000 dockerd[1101]: time="2023-09-11T10:59:15.059449506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 11 10:59:15 image-012000 dockerd[1101]: time="2023-09-11T10:59:15.059460173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:59:15 image-012000 dockerd[1101]: time="2023-09-11T10:59:15.090268881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 11 10:59:15 image-012000 dockerd[1101]: time="2023-09-11T10:59:15.090331923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:59:15 image-012000 dockerd[1101]: time="2023-09-11T10:59:15.090346631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 11 10:59:15 image-012000 dockerd[1101]: time="2023-09-11T10:59:15.090357340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:59:21 image-012000 dockerd[1095]: time="2023-09-11T10:59:21.353857884Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 11 10:59:21 image-012000 dockerd[1095]: time="2023-09-11T10:59:21.468851384Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 11 10:59:21 image-012000 dockerd[1095]: time="2023-09-11T10:59:21.484054468Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Sep 11 10:59:21 image-012000 dockerd[1101]: time="2023-09-11T10:59:21.513615718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 11 10:59:21 image-012000 dockerd[1101]: time="2023-09-11T10:59:21.513836676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:59:21 image-012000 dockerd[1101]: time="2023-09-11T10:59:21.513865510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 11 10:59:21 image-012000 dockerd[1101]: time="2023-09-11T10:59:21.513870301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 10:59:21 image-012000 dockerd[1095]: time="2023-09-11T10:59:21.651190718Z" level=info msg="ignoring event" container=6589292ad3e9a2fc41123420f0ec3ff197645226158ecdadbbeec3d6830ad3c2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 11 10:59:21 image-012000 dockerd[1101]: time="2023-09-11T10:59:21.651363718Z" level=info msg="shim disconnected" id=6589292ad3e9a2fc41123420f0ec3ff197645226158ecdadbbeec3d6830ad3c2 namespace=moby
	Sep 11 10:59:21 image-012000 dockerd[1101]: time="2023-09-11T10:59:21.651392135Z" level=warning msg="cleaning up after shim disconnected" id=6589292ad3e9a2fc41123420f0ec3ff197645226158ecdadbbeec3d6830ad3c2 namespace=moby
	Sep 11 10:59:21 image-012000 dockerd[1101]: time="2023-09-11T10:59:21.651396551Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	3d891f07a1232       b4a5a57e99492       7 seconds ago       Running             kube-scheduler            0                   702bf8214a878
	057f1c6acbe4d       9cdd6470f48c8       7 seconds ago       Running             etcd                      0                   17af9fb052c7e
	00037b3affb27       b29fb62480892       8 seconds ago       Running             kube-apiserver            0                   a444c1b9c7d5a
	a6c321cdfa665       8b6e1980b7584       8 seconds ago       Running             kube-controller-manager   0                   06b3cb794bbac
	
	* 
	* ==> describe nodes <==
	* Name:               image-012000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-012000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=image-012000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T03_59_19_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 10:59:16 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-012000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 10:59:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 10:59:19 +0000   Mon, 11 Sep 2023 10:59:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 10:59:19 +0000   Mon, 11 Sep 2023 10:59:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 10:59:19 +0000   Mon, 11 Sep 2023 10:59:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 11 Sep 2023 10:59:19 +0000   Mon, 11 Sep 2023 10:59:15 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-012000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 4b77913a3c6c4a088a37e09597779275
	  System UUID:                4b77913a3c6c4a088a37e09597779275
	  Boot ID:                    2c34014d-6ab6-4027-bf61-4a208aad2d9b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.5
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-012000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3s
	  kube-system                 kube-apiserver-image-012000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-image-012000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-image-012000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 3s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s    kubelet  Node image-012000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s    kubelet  Node image-012000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s    kubelet  Node image-012000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Sep11 10:58] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.665359] EINJ: EINJ table not found.
	[Sep11 10:59] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043132] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000793] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.037970] systemd-fstab-generator[476]: Ignoring "noauto" for root device
	[  +0.070882] systemd-fstab-generator[487]: Ignoring "noauto" for root device
	[  +0.404136] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.174153] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[  +0.062489] systemd-fstab-generator[712]: Ignoring "noauto" for root device
	[  +0.068728] systemd-fstab-generator[725]: Ignoring "noauto" for root device
	[  +1.224191] systemd-fstab-generator[914]: Ignoring "noauto" for root device
	[  +0.065660] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[  +0.082567] systemd-fstab-generator[936]: Ignoring "noauto" for root device
	[  +0.064204] systemd-fstab-generator[947]: Ignoring "noauto" for root device
	[  +0.064580] systemd-fstab-generator[981]: Ignoring "noauto" for root device
	[  +2.504772] systemd-fstab-generator[1088]: Ignoring "noauto" for root device
	[  +1.462954] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.652330] systemd-fstab-generator[1419]: Ignoring "noauto" for root device
	[  +5.118932] systemd-fstab-generator[2294]: Ignoring "noauto" for root device
	[  +2.384009] kauditd_printk_skb: 41 callbacks suppressed
	
	* 
	* ==> etcd [057f1c6acbe4] <==
	* {"level":"info","ts":"2023-09-11T10:59:15.209538Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T10:59:15.20996Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T10:59:15.20998Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T10:59:15.209593Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-11T10:59:15.210037Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-09-11T10:59:15.209919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-09-11T10:59:15.210152Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-09-11T10:59:16.004602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-11T10:59:16.004674Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-11T10:59:16.004698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-09-11T10:59:16.004716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-09-11T10:59:16.004895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-11T10:59:16.004943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-09-11T10:59:16.005049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-09-11T10:59:16.006604Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-012000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T10:59:16.006611Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T10:59:16.008078Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-11T10:59:16.006669Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T10:59:16.008575Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T10:59:16.008604Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T10:59:16.006722Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T10:59:16.013322Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-09-11T10:59:16.013387Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T10:59:16.013458Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T10:59:16.013482Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  10:59:22 up 0 min,  0 users,  load average: 1.11, 0.25, 0.08
	Linux image-012000 5.10.57 #1 SMP PREEMPT Thu Aug 24 12:01:08 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [00037b3affb2] <==
	* I0911 10:59:16.642608       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0911 10:59:16.642646       1 shared_informer.go:318] Caches are synced for configmaps
	I0911 10:59:16.643153       1 controller.go:624] quota admission added evaluator for: namespaces
	I0911 10:59:16.643207       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0911 10:59:16.643257       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0911 10:59:16.653340       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0911 10:59:16.658712       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0911 10:59:16.658833       1 aggregator.go:166] initial CRD sync complete...
	I0911 10:59:16.658965       1 autoregister_controller.go:141] Starting autoregister controller
	I0911 10:59:16.658977       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0911 10:59:16.659030       1 cache.go:39] Caches are synced for autoregister controller
	I0911 10:59:16.722132       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0911 10:59:17.544642       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0911 10:59:17.546407       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0911 10:59:17.546416       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0911 10:59:17.668844       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0911 10:59:17.679525       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0911 10:59:17.748011       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0911 10:59:17.750701       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0911 10:59:17.751071       1 controller.go:624] quota admission added evaluator for: endpoints
	I0911 10:59:17.752209       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0911 10:59:18.566858       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0911 10:59:19.249141       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0911 10:59:19.252917       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0911 10:59:19.256646       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [a6c321cdfa66] <==
	* I0911 10:59:15.246572       1 serving.go:348] Generated self-signed cert in-memory
	I0911 10:59:15.510628       1 controllermanager.go:189] "Starting" version="v1.28.1"
	I0911 10:59:15.510661       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 10:59:15.518021       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0911 10:59:15.518111       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0911 10:59:15.518121       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0911 10:59:15.518127       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0911 10:59:18.563572       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0911 10:59:18.567803       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0911 10:59:18.567887       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0911 10:59:18.567893       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0911 10:59:18.570504       1 controllermanager.go:642] "Started controller" controller="pod-garbage-collector-controller"
	I0911 10:59:18.570571       1 gc_controller.go:103] "Starting GC controller"
	I0911 10:59:18.570575       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0911 10:59:18.572891       1 controllermanager.go:642] "Started controller" controller="daemonset-controller"
	I0911 10:59:18.572949       1 daemon_controller.go:291] "Starting daemon sets controller"
	I0911 10:59:18.572954       1 shared_informer.go:311] Waiting for caches to sync for daemon sets
	I0911 10:59:18.575432       1 controllermanager.go:642] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0911 10:59:18.575480       1 certificate_controller.go:115] "Starting certificate controller" name="csrapproving"
	I0911 10:59:18.575486       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
	I0911 10:59:18.577766       1 controllermanager.go:642] "Started controller" controller="bootstrap-signer-controller"
	I0911 10:59:18.577837       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0911 10:59:18.664465       1 shared_informer.go:318] Caches are synced for tokens
	
	* 
	* ==> kube-scheduler [3d891f07a123] <==
	* W0911 10:59:16.630051       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0911 10:59:16.630077       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0911 10:59:16.630061       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0911 10:59:16.630113       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0911 10:59:16.630119       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0911 10:59:16.630130       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0911 10:59:16.630134       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0911 10:59:16.630147       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0911 10:59:16.630073       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 10:59:16.630247       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 10:59:16.630150       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0911 10:59:16.630095       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 10:59:16.630291       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 10:59:16.630183       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0911 10:59:16.630222       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0911 10:59:16.630299       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0911 10:59:16.630236       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0911 10:59:16.630306       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0911 10:59:16.630035       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 10:59:16.630311       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0911 10:59:17.534644       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 10:59:17.534665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0911 10:59:17.553694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0911 10:59:17.553711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0911 10:59:18.128680       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 10:59:01 UTC, ends at Mon 2023-09-11 10:59:22 UTC. --
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.400413    2313 kubelet_node_status.go:70] "Attempting to register node" node="image-012000"
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.406354    2313 kubelet_node_status.go:108] "Node was previously registered" node="image-012000"
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.406402    2313 kubelet_node_status.go:73] "Successfully registered node" node="image-012000"
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.406540    2313 topology_manager.go:215] "Topology Admit Handler" podUID="082e8c8b3edcee1f1d70fcb333695659" podNamespace="kube-system" podName="etcd-image-012000"
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.406588    2313 topology_manager.go:215] "Topology Admit Handler" podUID="de7a399272cd3eaddcdac1c7d1a8ace9" podNamespace="kube-system" podName="kube-apiserver-image-012000"
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.406611    2313 topology_manager.go:215] "Topology Admit Handler" podUID="e088e7c944c9593af6d4038da4bc4120" podNamespace="kube-system" podName="kube-controller-manager-image-012000"
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.406633    2313 topology_manager.go:215] "Topology Admit Handler" podUID="67fa90adfba02a6d22fee6127960a64d" podNamespace="kube-system" podName="kube-scheduler-image-012000"
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.499177    2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/de7a399272cd3eaddcdac1c7d1a8ace9-usr-share-ca-certificates\") pod \"kube-apiserver-image-012000\" (UID: \"de7a399272cd3eaddcdac1c7d1a8ace9\") " pod="kube-system/kube-apiserver-image-012000"
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.499216    2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e088e7c944c9593af6d4038da4bc4120-ca-certs\") pod \"kube-controller-manager-image-012000\" (UID: \"e088e7c944c9593af6d4038da4bc4120\") " pod="kube-system/kube-controller-manager-image-012000"
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.499286    2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e088e7c944c9593af6d4038da4bc4120-flexvolume-dir\") pod \"kube-controller-manager-image-012000\" (UID: \"e088e7c944c9593af6d4038da4bc4120\") " pod="kube-system/kube-controller-manager-image-012000"
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.499300    2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e088e7c944c9593af6d4038da4bc4120-kubeconfig\") pod \"kube-controller-manager-image-012000\" (UID: \"e088e7c944c9593af6d4038da4bc4120\") " pod="kube-system/kube-controller-manager-image-012000"
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.499322    2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e088e7c944c9593af6d4038da4bc4120-usr-share-ca-certificates\") pod \"kube-controller-manager-image-012000\" (UID: \"e088e7c944c9593af6d4038da4bc4120\") " pod="kube-system/kube-controller-manager-image-012000"
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.499369    2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67fa90adfba02a6d22fee6127960a64d-kubeconfig\") pod \"kube-scheduler-image-012000\" (UID: \"67fa90adfba02a6d22fee6127960a64d\") " pod="kube-system/kube-scheduler-image-012000"
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.499399    2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/082e8c8b3edcee1f1d70fcb333695659-etcd-certs\") pod \"etcd-image-012000\" (UID: \"082e8c8b3edcee1f1d70fcb333695659\") " pod="kube-system/etcd-image-012000"
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.499440    2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/de7a399272cd3eaddcdac1c7d1a8ace9-k8s-certs\") pod \"kube-apiserver-image-012000\" (UID: \"de7a399272cd3eaddcdac1c7d1a8ace9\") " pod="kube-system/kube-apiserver-image-012000"
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.499468    2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/082e8c8b3edcee1f1d70fcb333695659-etcd-data\") pod \"etcd-image-012000\" (UID: \"082e8c8b3edcee1f1d70fcb333695659\") " pod="kube-system/etcd-image-012000"
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.499530    2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/de7a399272cd3eaddcdac1c7d1a8ace9-ca-certs\") pod \"kube-apiserver-image-012000\" (UID: \"de7a399272cd3eaddcdac1c7d1a8ace9\") " pod="kube-system/kube-apiserver-image-012000"
	Sep 11 10:59:19 image-012000 kubelet[2313]: I0911 10:59:19.499551    2313 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e088e7c944c9593af6d4038da4bc4120-k8s-certs\") pod \"kube-controller-manager-image-012000\" (UID: \"e088e7c944c9593af6d4038da4bc4120\") " pod="kube-system/kube-controller-manager-image-012000"
	Sep 11 10:59:20 image-012000 kubelet[2313]: I0911 10:59:20.286126    2313 apiserver.go:52] "Watching apiserver"
	Sep 11 10:59:20 image-012000 kubelet[2313]: I0911 10:59:20.295483    2313 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 11 10:59:20 image-012000 kubelet[2313]: E0911 10:59:20.357378    2313 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-012000\" already exists" pod="kube-system/kube-apiserver-image-012000"
	Sep 11 10:59:20 image-012000 kubelet[2313]: I0911 10:59:20.359286    2313 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-012000" podStartSLOduration=1.359258342 podCreationTimestamp="2023-09-11 10:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-11 10:59:20.359205467 +0000 UTC m=+1.120012168" watchObservedRunningTime="2023-09-11 10:59:20.359258342 +0000 UTC m=+1.120065043"
	Sep 11 10:59:20 image-012000 kubelet[2313]: I0911 10:59:20.359376    2313 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-012000" podStartSLOduration=1.3593666340000001 podCreationTimestamp="2023-09-11 10:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-11 10:59:20.355651926 +0000 UTC m=+1.116458668" watchObservedRunningTime="2023-09-11 10:59:20.359366634 +0000 UTC m=+1.120173377"
	Sep 11 10:59:20 image-012000 kubelet[2313]: I0911 10:59:20.367407    2313 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-012000" podStartSLOduration=1.367385134 podCreationTimestamp="2023-09-11 10:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-11 10:59:20.362881384 +0000 UTC m=+1.123688127" watchObservedRunningTime="2023-09-11 10:59:20.367385134 +0000 UTC m=+1.128191877"
	Sep 11 10:59:20 image-012000 kubelet[2313]: I0911 10:59:20.371674    2313 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-012000" podStartSLOduration=1.371659092 podCreationTimestamp="2023-09-11 10:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-11 10:59:20.367518426 +0000 UTC m=+1.128325168" watchObservedRunningTime="2023-09-11 10:59:20.371659092 +0000 UTC m=+1.132465835"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-012000 -n image-012000
helpers_test.go:261: (dbg) Run:  kubectl --context image-012000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-012000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-012000 describe pod storage-provisioner: exit status 1 (39.836958ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-012000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.05s)
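
For local triage, here is a minimal Go sketch that replays the failing "image build" invocation exactly as recorded in the audit table later in this report; the binary path (out/minikube-darwin-arm64) and the image-012000 profile name are taken from this run and are assumptions about a matching local checkout, not part of the harness itself:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Replays the audited command: image build with a build-arg and
		// no-cache build option against the image-012000 profile.
		cmd := exec.Command("out/minikube-darwin-arm64",
			"image", "build", "-t", "aaa:latest",
			"--build-opt=build-arg=ENV_A=test_env_str",
			"--build-opt=no-cache",
			"./testdata/image-build/test-arg",
			"-p", "image-012000",
		)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			// The report shows this step exiting non-zero after ~1s.
			fmt.Println("exit:", err)
		}
	}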

TestIngressAddonLegacy/serial/ValidateIngressAddons (57.05s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-937000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-937000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.795821334s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-937000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-937000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [fbe9712b-d80f-41af-b6dd-de4e026d57ce] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [fbe9712b-d80f-41af-b6dd-de4e026d57ce] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.014943125s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-937000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-937000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-937000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.040908625s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
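
The timeout indicates the ingress-dns responder at 192.168.105.6 never answered. A hedged Go sketch of the same query nslookup issues, pointing the resolver directly at that server; the IP and the hello-john.test name come from the failing step above, and everything else is a local-reproduction assumption:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Send the lookup straight to the ingress-dns addon, bypassing the
		// host's configured resolvers, as nslookup does when given an
		// explicit server argument.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
				d := net.Dialer{Timeout: 5 * time.Second}
				return d.DialContext(ctx, network, "192.168.105.6:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "hello-john.test")
		if err != nil {
			// A timeout here matches the "no servers could be reached" output.
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("resolved:", addrs)
	}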
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-937000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-937000 addons disable ingress-dns --alsologtostderr -v=1: (4.845251708s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-937000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-937000 addons disable ingress --alsologtostderr -v=1: (7.110851167s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-937000 -n ingress-addon-legacy-937000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-937000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|  Command   |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh        | functional-740000 ssh sudo cat           | functional-740000           | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | /etc/ssl/certs/51391683.0                |                             |         |         |                     |                     |
	| ssh        | functional-740000 ssh sudo cat           | functional-740000           | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | /etc/ssl/certs/15652.pem                 |                             |         |         |                     |                     |
	| ssh        | functional-740000 ssh sudo cat           | functional-740000           | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | /usr/share/ca-certificates/15652.pem     |                             |         |         |                     |                     |
	| ssh        | functional-740000 ssh sudo cat           | functional-740000           | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | /etc/ssl/certs/3ec20f2e.0                |                             |         |         |                     |                     |
	| docker-env | functional-740000 docker-env             | functional-740000           | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	| docker-env | functional-740000 docker-env             | functional-740000           | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	| ssh        | functional-740000 ssh pgrep              | functional-740000           | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT |                     |
	|            | buildkitd                                |                             |         |         |                     |                     |
	| image      | functional-740000                        | functional-740000           | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | image ls --format json                   |                             |         |         |                     |                     |
	|            | --alsologtostderr                        |                             |         |         |                     |                     |
	| image      | functional-740000 image build -t         | functional-740000           | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | localhost/my-image:functional-740000     |                             |         |         |                     |                     |
	|            | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image      | functional-740000                        | functional-740000           | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	|            | image ls --format table                  |                             |         |         |                     |                     |
	|            | --alsologtostderr                        |                             |         |         |                     |                     |
	| image      | functional-740000 image ls               | functional-740000           | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	| delete     | -p functional-740000                     | functional-740000           | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:58 PDT |
	| start      | -p image-012000 --driver=qemu2           | image-012000                | jenkins | v1.31.2 | 11 Sep 23 03:58 PDT | 11 Sep 23 03:59 PDT |
	|            |                                          |                             |         |         |                     |                     |
	| image      | build -t aaa:latest                      | image-012000                | jenkins | v1.31.2 | 11 Sep 23 03:59 PDT | 11 Sep 23 03:59 PDT |
	|            | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|            | -p image-012000                          |                             |         |         |                     |                     |
	| image      | build -t aaa:latest                      | image-012000                | jenkins | v1.31.2 | 11 Sep 23 03:59 PDT | 11 Sep 23 03:59 PDT |
	|            | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|            | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|            | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|            | image-012000                             |                             |         |         |                     |                     |
	| image      | build -t aaa:latest                      | image-012000                | jenkins | v1.31.2 | 11 Sep 23 03:59 PDT | 11 Sep 23 03:59 PDT |
	|            | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|            | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|            | image-012000                             |                             |         |         |                     |                     |
	| image      | build -t aaa:latest                      | image-012000                | jenkins | v1.31.2 | 11 Sep 23 03:59 PDT | 11 Sep 23 03:59 PDT |
	|            | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|            | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|            | -p image-012000                          |                             |         |         |                     |                     |
	| delete     | -p image-012000                          | image-012000                | jenkins | v1.31.2 | 11 Sep 23 03:59 PDT | 11 Sep 23 03:59 PDT |
	| start      | -p ingress-addon-legacy-937000           | ingress-addon-legacy-937000 | jenkins | v1.31.2 | 11 Sep 23 03:59 PDT | 11 Sep 23 04:00 PDT |
	|            | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|            | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|            | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	|            | --driver=qemu2                           |                             |         |         |                     |                     |
	| addons     | ingress-addon-legacy-937000              | ingress-addon-legacy-937000 | jenkins | v1.31.2 | 11 Sep 23 04:00 PDT | 11 Sep 23 04:01 PDT |
	|            | addons enable ingress                    |                             |         |         |                     |                     |
	|            | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons     | ingress-addon-legacy-937000              | ingress-addon-legacy-937000 | jenkins | v1.31.2 | 11 Sep 23 04:01 PDT | 11 Sep 23 04:01 PDT |
	|            | addons enable ingress-dns                |                             |         |         |                     |                     |
	|            | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh        | ingress-addon-legacy-937000              | ingress-addon-legacy-937000 | jenkins | v1.31.2 | 11 Sep 23 04:01 PDT | 11 Sep 23 04:01 PDT |
	|            | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|            | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip         | ingress-addon-legacy-937000 ip           | ingress-addon-legacy-937000 | jenkins | v1.31.2 | 11 Sep 23 04:01 PDT | 11 Sep 23 04:01 PDT |
	| addons     | ingress-addon-legacy-937000              | ingress-addon-legacy-937000 | jenkins | v1.31.2 | 11 Sep 23 04:01 PDT | 11 Sep 23 04:02 PDT |
	|            | addons disable ingress-dns               |                             |         |         |                     |                     |
	|            | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons     | ingress-addon-legacy-937000              | ingress-addon-legacy-937000 | jenkins | v1.31.2 | 11 Sep 23 04:02 PDT | 11 Sep 23 04:02 PDT |
	|            | addons disable ingress                   |                             |         |         |                     |                     |
	|            | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 03:59:22
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 03:59:22.722416    2375 out.go:296] Setting OutFile to fd 1 ...
	I0911 03:59:22.722527    2375 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:59:22.722530    2375 out.go:309] Setting ErrFile to fd 2...
	I0911 03:59:22.722532    2375 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:59:22.722634    2375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 03:59:22.723685    2375 out.go:303] Setting JSON to false
	I0911 03:59:22.738818    2375 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1736,"bootTime":1694428226,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 03:59:22.738876    2375 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 03:59:22.742983    2375 out.go:177] * [ingress-addon-legacy-937000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 03:59:22.749934    2375 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 03:59:22.752998    2375 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 03:59:22.750021    2375 notify.go:220] Checking for updates...
	I0911 03:59:22.765870    2375 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 03:59:22.766839    2375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 03:59:22.769902    2375 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 03:59:22.772889    2375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 03:59:22.776110    2375 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 03:59:22.779893    2375 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 03:59:22.796827    2375 start.go:298] selected driver: qemu2
	I0911 03:59:22.796836    2375 start.go:902] validating driver "qemu2" against <nil>
	I0911 03:59:22.796845    2375 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 03:59:22.799022    2375 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 03:59:22.801900    2375 out.go:177] * Automatically selected the socket_vmnet network
	I0911 03:59:22.805019    2375 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 03:59:22.805043    2375 cni.go:84] Creating CNI manager for ""
	I0911 03:59:22.805050    2375 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0911 03:59:22.805055    2375 start_flags.go:321] config:
	{Name:ingress-addon-legacy-937000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-937000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:59:22.809867    2375 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 03:59:22.816880    2375 out.go:177] * Starting control plane node ingress-addon-legacy-937000 in cluster ingress-addon-legacy-937000
	I0911 03:59:22.820848    2375 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0911 03:59:22.881977    2375 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0911 03:59:22.882004    2375 cache.go:57] Caching tarball of preloaded images
	I0911 03:59:22.882208    2375 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0911 03:59:22.887878    2375 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0911 03:59:22.895900    2375 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:59:22.983623    2375 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0911 03:59:30.007050    2375 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:59:30.007202    2375 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:59:30.755813    2375 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0911 03:59:30.756010    2375 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/config.json ...
	I0911 03:59:30.756032    2375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/config.json: {Name:mk7302ad2ed3b7d21f24999cdacfe57a9a6a73be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:59:30.756302    2375 start.go:365] acquiring machines lock for ingress-addon-legacy-937000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 03:59:30.756332    2375 start.go:369] acquired machines lock for "ingress-addon-legacy-937000" in 21.625µs
	I0911 03:59:30.756342    2375 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-937000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 03:59:30.756378    2375 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 03:59:30.766332    2375 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0911 03:59:30.780966    2375 start.go:159] libmachine.API.Create for "ingress-addon-legacy-937000" (driver="qemu2")
	I0911 03:59:30.780989    2375 client.go:168] LocalClient.Create starting
	I0911 03:59:30.781065    2375 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 03:59:30.781095    2375 main.go:141] libmachine: Decoding PEM data...
	I0911 03:59:30.781105    2375 main.go:141] libmachine: Parsing certificate...
	I0911 03:59:30.781146    2375 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 03:59:30.781164    2375 main.go:141] libmachine: Decoding PEM data...
	I0911 03:59:30.781171    2375 main.go:141] libmachine: Parsing certificate...
	I0911 03:59:30.781479    2375 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 03:59:30.996203    2375 main.go:141] libmachine: Creating SSH key...
	I0911 03:59:31.166546    2375 main.go:141] libmachine: Creating Disk image...
	I0911 03:59:31.166552    2375 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 03:59:31.166701    2375 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/ingress-addon-legacy-937000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/ingress-addon-legacy-937000/disk.qcow2
	I0911 03:59:31.175496    2375 main.go:141] libmachine: STDOUT: 
	I0911 03:59:31.175510    2375 main.go:141] libmachine: STDERR: 
	I0911 03:59:31.175569    2375 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/ingress-addon-legacy-937000/disk.qcow2 +20000M
	I0911 03:59:31.182903    2375 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 03:59:31.182918    2375 main.go:141] libmachine: STDERR: 
	I0911 03:59:31.182936    2375 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/ingress-addon-legacy-937000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/ingress-addon-legacy-937000/disk.qcow2
	I0911 03:59:31.182940    2375 main.go:141] libmachine: Starting QEMU VM...
	I0911 03:59:31.182974    2375 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/ingress-addon-legacy-937000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/ingress-addon-legacy-937000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/ingress-addon-legacy-937000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:e3:1f:ef:d9:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/ingress-addon-legacy-937000/disk.qcow2
	I0911 03:59:31.217324    2375 main.go:141] libmachine: STDOUT: 
	I0911 03:59:31.217365    2375 main.go:141] libmachine: STDERR: 
	I0911 03:59:31.217369    2375 main.go:141] libmachine: Attempt 0
	I0911 03:59:31.217389    2375 main.go:141] libmachine: Searching for aa:e3:1f:ef:d9:38 in /var/db/dhcpd_leases ...
	I0911 03:59:31.217455    2375 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0911 03:59:31.217473    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:49:22:5b:da:de ID:1,8e:49:22:5b:da:de Lease:0x65004475}
	I0911 03:59:31.217479    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:62:99:da:56:de:73 ID:1,62:99:da:56:de:73 Lease:0x650043a6}
	I0911 03:59:31.217485    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:d8:6:ae:f2:7b ID:1,be:d8:6:ae:f2:7b Lease:0x64fef219}
	I0911 03:59:31.217490    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:8d:15:a0:6f:df ID:1,1a:8d:15:a0:6f:df Lease:0x65004356}
	I0911 03:59:33.219602    2375 main.go:141] libmachine: Attempt 1
	I0911 03:59:33.219678    2375 main.go:141] libmachine: Searching for aa:e3:1f:ef:d9:38 in /var/db/dhcpd_leases ...
	I0911 03:59:33.220115    2375 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0911 03:59:33.220165    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:49:22:5b:da:de ID:1,8e:49:22:5b:da:de Lease:0x65004475}
	I0911 03:59:33.220204    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:62:99:da:56:de:73 ID:1,62:99:da:56:de:73 Lease:0x650043a6}
	I0911 03:59:33.220235    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:d8:6:ae:f2:7b ID:1,be:d8:6:ae:f2:7b Lease:0x64fef219}
	I0911 03:59:33.220264    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:8d:15:a0:6f:df ID:1,1a:8d:15:a0:6f:df Lease:0x65004356}
	I0911 03:59:35.222293    2375 main.go:141] libmachine: Attempt 2
	I0911 03:59:35.222332    2375 main.go:141] libmachine: Searching for aa:e3:1f:ef:d9:38 in /var/db/dhcpd_leases ...
	I0911 03:59:35.222446    2375 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0911 03:59:35.222458    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:49:22:5b:da:de ID:1,8e:49:22:5b:da:de Lease:0x65004475}
	I0911 03:59:35.222464    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:62:99:da:56:de:73 ID:1,62:99:da:56:de:73 Lease:0x650043a6}
	I0911 03:59:35.222470    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:d8:6:ae:f2:7b ID:1,be:d8:6:ae:f2:7b Lease:0x64fef219}
	I0911 03:59:35.222475    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:8d:15:a0:6f:df ID:1,1a:8d:15:a0:6f:df Lease:0x65004356}
	I0911 03:59:37.224461    2375 main.go:141] libmachine: Attempt 3
	I0911 03:59:37.224472    2375 main.go:141] libmachine: Searching for aa:e3:1f:ef:d9:38 in /var/db/dhcpd_leases ...
	I0911 03:59:37.224580    2375 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0911 03:59:37.224593    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:49:22:5b:da:de ID:1,8e:49:22:5b:da:de Lease:0x65004475}
	I0911 03:59:37.224599    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:62:99:da:56:de:73 ID:1,62:99:da:56:de:73 Lease:0x650043a6}
	I0911 03:59:37.224605    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:d8:6:ae:f2:7b ID:1,be:d8:6:ae:f2:7b Lease:0x64fef219}
	I0911 03:59:37.224610    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:8d:15:a0:6f:df ID:1,1a:8d:15:a0:6f:df Lease:0x65004356}
	I0911 03:59:39.226597    2375 main.go:141] libmachine: Attempt 4
	I0911 03:59:39.226604    2375 main.go:141] libmachine: Searching for aa:e3:1f:ef:d9:38 in /var/db/dhcpd_leases ...
	I0911 03:59:39.226632    2375 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0911 03:59:39.226639    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:49:22:5b:da:de ID:1,8e:49:22:5b:da:de Lease:0x65004475}
	I0911 03:59:39.226643    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:62:99:da:56:de:73 ID:1,62:99:da:56:de:73 Lease:0x650043a6}
	I0911 03:59:39.226677    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:d8:6:ae:f2:7b ID:1,be:d8:6:ae:f2:7b Lease:0x64fef219}
	I0911 03:59:39.226684    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:8d:15:a0:6f:df ID:1,1a:8d:15:a0:6f:df Lease:0x65004356}
	I0911 03:59:41.228675    2375 main.go:141] libmachine: Attempt 5
	I0911 03:59:41.228705    2375 main.go:141] libmachine: Searching for aa:e3:1f:ef:d9:38 in /var/db/dhcpd_leases ...
	I0911 03:59:41.228784    2375 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0911 03:59:41.228794    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:8e:49:22:5b:da:de ID:1,8e:49:22:5b:da:de Lease:0x65004475}
	I0911 03:59:41.228799    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:62:99:da:56:de:73 ID:1,62:99:da:56:de:73 Lease:0x650043a6}
	I0911 03:59:41.228804    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:be:d8:6:ae:f2:7b ID:1,be:d8:6:ae:f2:7b Lease:0x64fef219}
	I0911 03:59:41.228809    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:8d:15:a0:6f:df ID:1,1a:8d:15:a0:6f:df Lease:0x65004356}
	I0911 03:59:43.230860    2375 main.go:141] libmachine: Attempt 6
	I0911 03:59:43.230919    2375 main.go:141] libmachine: Searching for aa:e3:1f:ef:d9:38 in /var/db/dhcpd_leases ...
	I0911 03:59:43.231056    2375 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0911 03:59:43.231088    2375 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:aa:e3:1f:ef:d9:38 ID:1,aa:e3:1f:ef:d9:38 Lease:0x6500449e}
	I0911 03:59:43.231096    2375 main.go:141] libmachine: Found match: aa:e3:1f:ef:d9:38
	I0911 03:59:43.231110    2375 main.go:141] libmachine: IP: 192.168.105.6
	I0911 03:59:43.231118    2375 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
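
The "Attempt 0" through "Attempt 6" rounds above are a simple 2-second poll of /var/db/dhcpd_leases until an entry with the new VM's MAC address appears. A minimal sketch of that lookup, assuming the usual macOS lease-file layout of key=value lines grouped in { } blocks (the field names are my inference from the entries printed above, not a documented format):

// lease_sketch.go — illustrative only.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIP returns the ip_address recorded in the same lease block as mac.
func findIP(leasePath, mac string) (string, error) {
	f, err := os.Open(leasePath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=") // remember the block's IP
		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
			return ip, nil // MAC matched inside the current block
		}
	}
	return "", fmt.Errorf("%s not found in %s", mac, leasePath)
}

func main() {
	ip, err := findIP("/var/db/dhcpd_leases", "aa:e3:1f:ef:d9:38")
	if err != nil {
		fmt.Println(err) // the log above simply retries until the lease shows up
		return
	}
	fmt.Println("IP:", ip)
}
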
	I0911 03:59:44.237023    2375 machine.go:88] provisioning docker machine ...
	I0911 03:59:44.237045    2375 buildroot.go:166] provisioning hostname "ingress-addon-legacy-937000"
	I0911 03:59:44.237084    2375 main.go:141] libmachine: Using SSH client type: native
	I0911 03:59:44.237347    2375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10284e3b0] 0x102850e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0911 03:59:44.237354    2375 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-937000 && echo "ingress-addon-legacy-937000" | sudo tee /etc/hostname
	I0911 03:59:44.300053    2375 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-937000
	
	I0911 03:59:44.300117    2375 main.go:141] libmachine: Using SSH client type: native
	I0911 03:59:44.300370    2375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10284e3b0] 0x102850e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0911 03:59:44.300382    2375 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-937000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-937000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-937000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 03:59:44.362480    2375 main.go:141] libmachine: SSH cmd err, output: <nil>: 
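
The shell snippet above is an idempotent /etc/hosts update: if the hostname is already present, do nothing; otherwise rewrite an existing 127.0.1.1 line, or append one. The same logic as a minimal Go sketch (the path and hostname are taken from the log; the containment check is a simplification of the grep patterns):

// hosts_sketch.go — illustrative only.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHost(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), hostname) {
		return nil // already present; nothing to do
	}
	lines := strings.Split(string(data), "\n")
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // rewrite the loopback alias
			replaced = true
		}
	}
	out := strings.Join(lines, "\n")
	if !replaced {
		out = strings.TrimRight(out, "\n") + fmt.Sprintf("\n127.0.1.1 %s\n", hostname)
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHost("/etc/hosts", "ingress-addon-legacy-937000"); err != nil {
		fmt.Println(err)
	}
}
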
	I0911 03:59:44.362492    2375 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17223-1124/.minikube CaCertPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17223-1124/.minikube}
	I0911 03:59:44.362500    2375 buildroot.go:174] setting up certificates
	I0911 03:59:44.362505    2375 provision.go:83] configureAuth start
	I0911 03:59:44.362509    2375 provision.go:138] copyHostCerts
	I0911 03:59:44.362537    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cert.pem
	I0911 03:59:44.362577    2375 exec_runner.go:144] found /Users/jenkins/minikube-integration/17223-1124/.minikube/cert.pem, removing ...
	I0911 03:59:44.362582    2375 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17223-1124/.minikube/cert.pem
	I0911 03:59:44.362709    2375 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17223-1124/.minikube/cert.pem (1123 bytes)
	I0911 03:59:44.362861    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17223-1124/.minikube/key.pem
	I0911 03:59:44.362882    2375 exec_runner.go:144] found /Users/jenkins/minikube-integration/17223-1124/.minikube/key.pem, removing ...
	I0911 03:59:44.362884    2375 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17223-1124/.minikube/key.pem
	I0911 03:59:44.362933    2375 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17223-1124/.minikube/key.pem (1679 bytes)
	I0911 03:59:44.363011    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.pem
	I0911 03:59:44.363028    2375 exec_runner.go:144] found /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.pem, removing ...
	I0911 03:59:44.363030    2375 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.pem
	I0911 03:59:44.363072    2375 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.pem (1078 bytes)
	I0911 03:59:44.363156    2375 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-937000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-937000]
	I0911 03:59:44.444162    2375 provision.go:172] copyRemoteCerts
	I0911 03:59:44.444190    2375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 03:59:44.444197    2375 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/ingress-addon-legacy-937000/id_rsa Username:docker}
	I0911 03:59:44.478070    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0911 03:59:44.478124    2375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0911 03:59:44.484712    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0911 03:59:44.484751    2375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0911 03:59:44.491281    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0911 03:59:44.491320    2375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 03:59:44.497993    2375 provision.go:86] duration metric: configureAuth took 135.482792ms
	I0911 03:59:44.498000    2375 buildroot.go:189] setting minikube options for container-runtime
	I0911 03:59:44.498112    2375 config.go:182] Loaded profile config "ingress-addon-legacy-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0911 03:59:44.498147    2375 main.go:141] libmachine: Using SSH client type: native
	I0911 03:59:44.498367    2375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10284e3b0] 0x102850e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0911 03:59:44.498375    2375 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0911 03:59:44.556741    2375 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0911 03:59:44.556750    2375 buildroot.go:70] root file system type: tmpfs
	I0911 03:59:44.556809    2375 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0911 03:59:44.556853    2375 main.go:141] libmachine: Using SSH client type: native
	I0911 03:59:44.557092    2375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10284e3b0] 0x102850e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0911 03:59:44.557132    2375 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0911 03:59:44.624387    2375 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0911 03:59:44.624434    2375 main.go:141] libmachine: Using SSH client type: native
	I0911 03:59:44.624697    2375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10284e3b0] 0x102850e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0911 03:59:44.624710    2375 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0911 03:59:44.959339    2375 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0911 03:59:44.959359    2375 machine.go:91] provisioned docker machine in 722.345458ms
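
The "diff ... || { mv ...; daemon-reload; enable; restart; }" command a few lines above is an idempotent unit update: the freshly rendered docker.service.new only replaces the live unit, and only then triggers a restart, when the two actually differ. The same compare-and-swap as a local Go sketch (paths from the log; running without sudo is an assumption of the sketch):

// unitswap_sketch.go — illustrative only.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// updateUnit swaps next into place only when it differs from live,
// then reloads systemd and restarts docker, mirroring the logged command.
func updateUnit(live, next string) error {
	oldData, _ := os.ReadFile(live) // a missing live unit reads as empty, forcing the swap
	newData, err := os.ReadFile(next)
	if err != nil {
		return err
	}
	if bytes.Equal(oldData, newData) {
		return os.Remove(next) // no change: discard the staged copy
	}
	if err := os.Rename(next, live); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"); err != nil {
		log.Fatal(err)
	}
}
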
	I0911 03:59:44.959364    2375 client.go:171] LocalClient.Create took 14.178731042s
	I0911 03:59:44.959378    2375 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-937000" took 14.178774875s
	I0911 03:59:44.959385    2375 start.go:300] post-start starting for "ingress-addon-legacy-937000" (driver="qemu2")
	I0911 03:59:44.959393    2375 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 03:59:44.959456    2375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 03:59:44.959465    2375 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/ingress-addon-legacy-937000/id_rsa Username:docker}
	I0911 03:59:44.992794    2375 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 03:59:44.994198    2375 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 03:59:44.994204    2375 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17223-1124/.minikube/addons for local assets ...
	I0911 03:59:44.994278    2375 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17223-1124/.minikube/files for local assets ...
	I0911 03:59:44.994384    2375 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem -> 15652.pem in /etc/ssl/certs
	I0911 03:59:44.994391    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem -> /etc/ssl/certs/15652.pem
	I0911 03:59:44.994497    2375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 03:59:44.997619    2375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem --> /etc/ssl/certs/15652.pem (1708 bytes)
	I0911 03:59:45.004868    2375 start.go:303] post-start completed in 45.476709ms
	I0911 03:59:45.005243    2375 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/config.json ...
	I0911 03:59:45.005400    2375 start.go:128] duration metric: createHost completed in 14.249380333s
	I0911 03:59:45.005433    2375 main.go:141] libmachine: Using SSH client type: native
	I0911 03:59:45.005657    2375 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10284e3b0] 0x102850e10 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0911 03:59:45.005663    2375 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 03:59:45.064549    2375 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694429985.438886710
	
	I0911 03:59:45.064558    2375 fix.go:206] guest clock: 1694429985.438886710
	I0911 03:59:45.064562    2375 fix.go:219] Guest: 2023-09-11 03:59:45.43888671 -0700 PDT Remote: 2023-09-11 03:59:45.005403 -0700 PDT m=+22.303167542 (delta=433.48371ms)
	I0911 03:59:45.064574    2375 fix.go:190] guest clock delta is within tolerance: 433.48371ms
	I0911 03:59:45.064576    2375 start.go:83] releasing machines lock for "ingress-addon-legacy-937000", held for 14.308603833s
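
The guest-clock check above reads `date +%s.%N` over SSH, compares it against the host clock, and accepts the result if the delta is small. A minimal sketch of that comparison; the one-second tolerance here is my assumption for illustration, not minikube's actual constant:

// clockcheck_sketch.go — illustrative only.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// withinTolerance parses the guest's "%s.%N" output and compares it to host.
func withinTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	if len(parts) != 2 {
		return 0, false // not the seconds.nanoseconds form shown in the log
	}
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	delta := time.Unix(sec, nsec).Sub(host)
	return delta, math.Abs(float64(delta)) <= float64(tol)
}

func main() {
	host := time.Unix(1694429985, 5403000) // ≈ the "Remote" timestamp in the log
	delta, ok := withinTolerance("1694429985.438886710", host, time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // ≈433ms, true
}
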
	I0911 03:59:45.064891    2375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 03:59:45.064914    2375 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/ingress-addon-legacy-937000/id_rsa Username:docker}
	I0911 03:59:45.064891    2375 ssh_runner.go:195] Run: cat /version.json
	I0911 03:59:45.064930    2375 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/ingress-addon-legacy-937000/id_rsa Username:docker}
	I0911 03:59:45.135630    2375 ssh_runner.go:195] Run: systemctl --version
	I0911 03:59:45.137745    2375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 03:59:45.139565    2375 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 03:59:45.139602    2375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0911 03:59:45.143033    2375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0911 03:59:45.148149    2375 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 03:59:45.148156    2375 start.go:466] detecting cgroup driver to use...
	I0911 03:59:45.148230    2375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 03:59:45.155060    2375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0911 03:59:45.158599    2375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0911 03:59:45.161967    2375 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0911 03:59:45.161996    2375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0911 03:59:45.164757    2375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 03:59:45.167722    2375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0911 03:59:45.170936    2375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0911 03:59:45.174249    2375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 03:59:45.177257    2375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0911 03:59:45.180136    2375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 03:59:45.183178    2375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 03:59:45.186249    2375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:59:45.254131    2375 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0911 03:59:45.261308    2375 start.go:466] detecting cgroup driver to use...
	I0911 03:59:45.261375    2375 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0911 03:59:45.267102    2375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 03:59:45.272345    2375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 03:59:45.285029    2375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 03:59:45.289360    2375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0911 03:59:45.294030    2375 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0911 03:59:45.330190    2375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0911 03:59:45.335429    2375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 03:59:45.340873    2375 ssh_runner.go:195] Run: which cri-dockerd
	I0911 03:59:45.342132    2375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0911 03:59:45.344740    2375 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0911 03:59:45.349714    2375 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0911 03:59:45.430705    2375 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0911 03:59:45.495317    2375 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0911 03:59:45.495332    2375 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0911 03:59:45.500901    2375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:59:45.584515    2375 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0911 03:59:46.743294    2375 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.158791708s)
	I0911 03:59:46.743377    2375 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0911 03:59:46.758847    2375 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0911 03:59:46.774817    2375 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.5 ...
	I0911 03:59:46.774907    2375 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0911 03:59:46.776354    2375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 03:59:46.779972    2375 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0911 03:59:46.780015    2375 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0911 03:59:46.785481    2375 docker.go:636] Got preloaded images: 
	I0911 03:59:46.785489    2375 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0911 03:59:46.785536    2375 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0911 03:59:46.788339    2375 ssh_runner.go:195] Run: which lz4
	I0911 03:59:46.789596    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0911 03:59:46.789685    2375 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 03:59:46.790878    2375 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 03:59:46.790891    2375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0911 03:59:48.482913    2375 docker.go:600] Took 1.693311 seconds to copy over tarball
	I0911 03:59:48.482967    2375 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 03:59:49.773414    2375 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.290463667s)
	I0911 03:59:49.773427    2375 ssh_runner.go:146] rm: /preloaded.tar.lz4
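
The preload sequence above first stats /preloaded.tar.lz4 on the guest, copies the tarball over only when that stat fails, unpacks it with `tar -I lz4` into /var, and then deletes it. A minimal local sketch of the extract-and-clean step, assuming tar and lz4 are installed (paths copied from the log):

// preload_sketch.go — illustrative only.
package main

import (
	"log"
	"os"
	"os/exec"
)

func extractPreload(tarball, destDir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return err // the caller would scp the tarball first, as in the log
	}
	// tar -I lz4 -C <dest> -xf <tarball>
	if out, err := exec.Command("tar", "-I", "lz4", "-C", destDir,
		"-xf", tarball).CombinedOutput(); err != nil {
		log.Printf("tar output: %s", out)
		return err
	}
	return os.Remove(tarball) // free the space once the layers are unpacked
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		log.Fatal(err)
	}
}
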
	I0911 03:59:49.794865    2375 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0911 03:59:49.799310    2375 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0911 03:59:49.808009    2375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 03:59:49.891820    2375 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0911 03:59:51.377334    2375 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.485535708s)
	I0911 03:59:51.377437    2375 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0911 03:59:51.383303    2375 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0911 03:59:51.383311    2375 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0911 03:59:51.383315    2375 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0911 03:59:51.394854    2375 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0911 03:59:51.394901    2375 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0911 03:59:51.395741    2375 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 03:59:51.395822    2375 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0911 03:59:51.396127    2375 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0911 03:59:51.396222    2375 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0911 03:59:51.396939    2375 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 03:59:51.398937    2375 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0911 03:59:51.405689    2375 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 03:59:51.405750    2375 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0911 03:59:51.405820    2375 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0911 03:59:51.406602    2375 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0911 03:59:51.406812    2375 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0911 03:59:51.406858    2375 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 03:59:51.406928    2375 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0911 03:59:51.407728    2375 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	W0911 03:59:52.223899    2375 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0911 03:59:52.223997    2375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0911 03:59:52.230095    2375 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0911 03:59:52.230123    2375 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0911 03:59:52.230165    2375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0911 03:59:52.237719    2375 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0911 03:59:52.281117    2375 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0911 03:59:52.281232    2375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0911 03:59:52.287514    2375 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0911 03:59:52.287538    2375 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0911 03:59:52.287585    2375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0911 03:59:52.293918    2375 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W0911 03:59:52.350330    2375 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0911 03:59:52.350439    2375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 03:59:52.356989    2375 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0911 03:59:52.357010    2375 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 03:59:52.357054    2375 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 03:59:52.370015    2375 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W0911 03:59:52.442750    2375 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0911 03:59:52.442897    2375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0911 03:59:52.452999    2375 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0911 03:59:52.453023    2375 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0911 03:59:52.453060    2375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0911 03:59:52.458638    2375 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W0911 03:59:52.666103    2375 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0911 03:59:52.666225    2375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0911 03:59:52.672725    2375 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0911 03:59:52.672751    2375 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0911 03:59:52.672793    2375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0911 03:59:52.678900    2375 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W0911 03:59:52.882639    2375 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0911 03:59:52.882761    2375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 03:59:52.888986    2375 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0911 03:59:52.889011    2375 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 03:59:52.889062    2375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 03:59:52.895280    2375 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W0911 03:59:53.114308    2375 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0911 03:59:53.114446    2375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0911 03:59:53.128917    2375 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0911 03:59:53.128939    2375 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0911 03:59:53.128983    2375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0911 03:59:53.134443    2375 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0911 03:59:53.300357    2375 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0911 03:59:53.316790    2375 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0911 03:59:53.316834    2375 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0911 03:59:53.316926    2375 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0911 03:59:53.333250    2375 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I0911 03:59:53.333311    2375 cache_images.go:92] LoadImages completed in 1.950039167s
	W0911 03:59:53.333407    2375 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
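
Each "needs transfer" round above follows the same pattern: inspect the image's current ID in the runtime, compare it against the expected arm64 hash, and on a mismatch remove the amd64 copy before loading the cached arm64 build. A minimal sketch of that decision step (the truncated hash is from the log; the load itself is elided here because the cache files were missing in this run):

// imagesync_sketch.go — illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether img is absent or present at the wrong ID.
func needsTransfer(img, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", img).Output()
	if err != nil {
		return true // image not present at all
	}
	return !strings.Contains(strings.TrimSpace(string(out)), wantID)
}

func main() {
	img := "registry.k8s.io/pause:3.2"
	if needsTransfer(img, "2a060e2e7101") {
		// remove the wrong-arch copy, then load the cached arm64 tarball
		exec.Command("docker", "rmi", img).Run()
		fmt.Println("would load", img, "from the local image cache")
	}
}
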
	I0911 03:59:53.333491    2375 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0911 03:59:53.346196    2375 cni.go:84] Creating CNI manager for ""
	I0911 03:59:53.346221    2375 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0911 03:59:53.346248    2375 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 03:59:53.346260    2375 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-937000 NodeName:ingress-addon-legacy-937000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0911 03:59:53.346379    2375 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-937000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 03:59:53.346435    2375 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-937000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-937000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 03:59:53.346510    2375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0911 03:59:53.351102    2375 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 03:59:53.351136    2375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 03:59:53.354990    2375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0911 03:59:53.361165    2375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0911 03:59:53.367123    2375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0911 03:59:53.372866    2375 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0911 03:59:53.374097    2375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 03:59:53.378017    2375 certs.go:56] Setting up /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000 for IP: 192.168.105.6
	I0911 03:59:53.378033    2375 certs.go:190] acquiring lock for shared ca certs: {Name:mk38c09806021c18792511eb48bf232ccb80ec29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:59:53.378197    2375 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.key
	I0911 03:59:53.378241    2375 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.key
	I0911 03:59:53.378266    2375 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.key
	I0911 03:59:53.378276    2375 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt with IP's: []
	I0911 03:59:53.470353    2375 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt ...
	I0911 03:59:53.470358    2375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: {Name:mk2f021357102006272680638dddf717fd23cd03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:59:53.470582    2375 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.key ...
	I0911 03:59:53.470585    2375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.key: {Name:mk7d98f98cf229f72f173dbf4ed954c8a506fc07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:59:53.470704    2375 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/apiserver.key.b354f644
	I0911 03:59:53.470710    2375 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 03:59:53.600658    2375 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/apiserver.crt.b354f644 ...
	I0911 03:59:53.600661    2375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/apiserver.crt.b354f644: {Name:mk406b2c6cc2fc03b5322f43bfc82db200f8ecb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:59:53.600808    2375 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/apiserver.key.b354f644 ...
	I0911 03:59:53.600814    2375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/apiserver.key.b354f644: {Name:mk8378c2ecba73b6362645f20bd49985c1440521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:59:53.600919    2375 certs.go:337] copying /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/apiserver.crt
	I0911 03:59:53.601141    2375 certs.go:341] copying /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/apiserver.key
	I0911 03:59:53.601243    2375 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/proxy-client.key
	I0911 03:59:53.601249    2375 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/proxy-client.crt with IP's: []
	I0911 03:59:53.672129    2375 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/proxy-client.crt ...
	I0911 03:59:53.672133    2375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/proxy-client.crt: {Name:mkbc1c382180ef2b421950c1404ef28b2a8c4f39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:59:53.672267    2375 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/proxy-client.key ...
	I0911 03:59:53.672270    2375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/proxy-client.key: {Name:mkfb90c05c7e12d236e821fc0e8071201e235541 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:59:53.672379    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0911 03:59:53.672393    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0911 03:59:53.672405    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0911 03:59:53.672416    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0911 03:59:53.672427    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0911 03:59:53.672447    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0911 03:59:53.672458    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0911 03:59:53.672473    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0911 03:59:53.672561    2375 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/1565.pem (1338 bytes)
	W0911 03:59:53.672603    2375 certs.go:433] ignoring /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/1565_empty.pem, impossibly tiny 0 bytes
	I0911 03:59:53.672612    2375 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 03:59:53.672635    2375 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem (1078 bytes)
	I0911 03:59:53.672657    2375 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem (1123 bytes)
	I0911 03:59:53.672699    2375 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/certs/key.pem (1679 bytes)
	I0911 03:59:53.672753    2375 certs.go:437] found cert: /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem (1708 bytes)
	I0911 03:59:53.672778    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem -> /usr/share/ca-certificates/15652.pem
	I0911 03:59:53.672790    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:59:53.672801    2375 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/1565.pem -> /usr/share/ca-certificates/1565.pem
	I0911 03:59:53.673182    2375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 03:59:53.680808    2375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0911 03:59:53.687873    2375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 03:59:53.694950    2375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0911 03:59:53.701945    2375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 03:59:53.708610    2375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 03:59:53.715287    2375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 03:59:53.722382    2375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0911 03:59:53.729344    2375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/ssl/certs/15652.pem --> /usr/share/ca-certificates/15652.pem (1708 bytes)
	I0911 03:59:53.735875    2375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 03:59:53.742941    2375 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/1565.pem --> /usr/share/ca-certificates/1565.pem (1338 bytes)
	I0911 03:59:53.750055    2375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 03:59:53.755011    2375 ssh_runner.go:195] Run: openssl version
	I0911 03:59:53.756893    2375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15652.pem && ln -fs /usr/share/ca-certificates/15652.pem /etc/ssl/certs/15652.pem"
	I0911 03:59:53.760173    2375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15652.pem
	I0911 03:59:53.761629    2375 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 10:55 /usr/share/ca-certificates/15652.pem
	I0911 03:59:53.761655    2375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15652.pem
	I0911 03:59:53.763411    2375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15652.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 03:59:53.766652    2375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 03:59:53.770025    2375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:59:53.771614    2375 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:54 /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:59:53.771631    2375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 03:59:53.773347    2375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 03:59:53.776309    2375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1565.pem && ln -fs /usr/share/ca-certificates/1565.pem /etc/ssl/certs/1565.pem"
	I0911 03:59:53.779337    2375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1565.pem
	I0911 03:59:53.780692    2375 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 10:55 /usr/share/ca-certificates/1565.pem
	I0911 03:59:53.780718    2375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1565.pem
	I0911 03:59:53.782442    2375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1565.pem /etc/ssl/certs/51391683.0"
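The `.0` symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash lookup convention for /etc/ssl/certs: `openssl x509 -hash -noout` prints the hash that the link's basename must carry so TLS clients can locate the CA. A minimal Go sketch of the same dance (hypothetical helper, not minikube's actual code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // subjectHash mirrors the `openssl x509 -hash -noout -in <cert>` calls in
    // the log: OpenSSL prints the subject-name hash that links under
    // /etc/ssl/certs must be named after (<hash>.0).
    func subjectHash(certPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	hash, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
    	if err != nil {
    		panic(err)
    	}
    	// minikube then runs the equivalent of:
    	//   test -L /etc/ssl/certs/<hash>.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/<hash>.0
    	fmt.Printf("would link /etc/ssl/certs/%s.0 -> minikubeCA.pem\n", hash)
    }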
	I0911 03:59:53.785470    2375 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 03:59:53.786696    2375 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 03:59:53.786725    2375 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-937000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-937000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:59:53.786789    2375 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0911 03:59:53.792194    2375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 03:59:53.795336    2375 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 03:59:53.798474    2375 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 03:59:53.801357    2375 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 03:59:53.801370    2375 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0911 03:59:53.830088    2375 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0911 03:59:53.831067    2375 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 03:59:53.924866    2375 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 03:59:53.924953    2375 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 03:59:53.925000    2375 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0911 03:59:53.972860    2375 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 03:59:53.973513    2375 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 03:59:53.973533    2375 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 03:59:54.055172    2375 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 03:59:54.062327    2375 out.go:204]   - Generating certificates and keys ...
	I0911 03:59:54.062371    2375 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 03:59:54.062401    2375 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 03:59:54.177042    2375 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 03:59:54.217997    2375 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 03:59:54.309418    2375 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 03:59:54.434863    2375 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 03:59:54.522345    2375 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 03:59:54.522473    2375 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-937000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0911 03:59:54.590581    2375 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 03:59:54.590650    2375 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-937000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0911 03:59:54.888546    2375 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 03:59:54.966477    2375 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 03:59:55.082524    2375 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 03:59:55.083592    2375 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 03:59:55.149779    2375 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 03:59:55.192618    2375 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 03:59:55.270022    2375 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 03:59:55.325133    2375 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 03:59:55.325459    2375 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 03:59:55.329607    2375 out.go:204]   - Booting up control plane ...
	I0911 03:59:55.329661    2375 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 03:59:55.329707    2375 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 03:59:55.329745    2375 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 03:59:55.329900    2375 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 03:59:55.332195    2375 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 04:00:05.833619    2375 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.501560 seconds
	I0911 04:00:05.833702    2375 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 04:00:05.841536    2375 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 04:00:06.371511    2375 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 04:00:06.371721    2375 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-937000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0911 04:00:06.874606    2375 kubeadm.go:322] [bootstrap-token] Using token: 4wdcwq.j3fmaqa8pscxxzck
	I0911 04:00:06.883843    2375 out.go:204]   - Configuring RBAC rules ...
	I0911 04:00:06.883909    2375 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 04:00:06.883980    2375 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 04:00:06.884992    2375 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 04:00:06.890622    2375 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 04:00:06.891523    2375 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 04:00:06.892380    2375 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 04:00:06.895546    2375 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 04:00:07.055634    2375 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 04:00:07.294808    2375 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 04:00:07.295505    2375 kubeadm.go:322] 
	I0911 04:00:07.295562    2375 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 04:00:07.295570    2375 kubeadm.go:322] 
	I0911 04:00:07.295630    2375 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 04:00:07.295635    2375 kubeadm.go:322] 
	I0911 04:00:07.295654    2375 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 04:00:07.295704    2375 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 04:00:07.295773    2375 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 04:00:07.295791    2375 kubeadm.go:322] 
	I0911 04:00:07.295844    2375 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 04:00:07.295937    2375 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 04:00:07.295996    2375 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 04:00:07.296007    2375 kubeadm.go:322] 
	I0911 04:00:07.296096    2375 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 04:00:07.296191    2375 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 04:00:07.296203    2375 kubeadm.go:322] 
	I0911 04:00:07.296282    2375 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 4wdcwq.j3fmaqa8pscxxzck \
	I0911 04:00:07.296428    2375 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:77399ad9541b4667fda28bf9bf29366ef8ebe6fdc39d6e893157dd935cb9f38b \
	I0911 04:00:07.296450    2375 kubeadm.go:322]     --control-plane 
	I0911 04:00:07.296459    2375 kubeadm.go:322] 
	I0911 04:00:07.296531    2375 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 04:00:07.296543    2375 kubeadm.go:322] 
	I0911 04:00:07.296622    2375 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 4wdcwq.j3fmaqa8pscxxzck \
	I0911 04:00:07.296756    2375 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:77399ad9541b4667fda28bf9bf29366ef8ebe6fdc39d6e893157dd935cb9f38b 
	I0911 04:00:07.297044    2375 kubeadm.go:322] W0911 10:59:54.204410    1408 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0911 04:00:07.297219    2375 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0911 04:00:07.297335    2375 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.5. Latest validated version: 19.03
	I0911 04:00:07.297417    2375 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 04:00:07.297514    2375 kubeadm.go:322] W0911 10:59:55.703519    1408 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0911 04:00:07.297611    2375 kubeadm.go:322] W0911 10:59:55.704039    1408 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0911 04:00:07.297626    2375 cni.go:84] Creating CNI manager for ""
	I0911 04:00:07.297638    2375 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0911 04:00:07.297652    2375 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 04:00:07.297768    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:07.297775    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=ingress-addon-legacy-937000 minikube.k8s.io/updated_at=2023_09_11T04_00_07_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:07.303654    2375 ops.go:34] apiserver oom_adj: -16
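The oom_adj probe above reads /proc/<pid>/oom_adj for the apiserver; the value -16 biases the kernel OOM killer strongly away from it. A self-contained sketch of that probe (assumes a Linux host with pgrep, as inside the VM):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Newest (-n) process matched by full command line (-f), like the
    	// `pgrep -xnf kube-apiserver.*minikube.*` call in the log.
    	pid, err := exec.Command("pgrep", "-nf", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	raw, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
    	if err != nil {
    		panic(err)
    	}
    	// -16 (seen above) tells the OOM killer to prefer almost any other victim.
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(raw)))
    }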
	I0911 04:00:07.381269    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:07.413961    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:07.954194    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:08.454168    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:08.954232    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:09.454113    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:09.954211    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:10.453964    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:10.954149    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:11.454160    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:11.954047    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:12.453992    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:12.953971    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:13.454022    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:13.954008    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:14.454078    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:14.954048    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:15.454005    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:15.953756    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:16.454003    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:16.953904    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:17.453861    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:17.953975    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:18.453894    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:18.953985    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:19.453929    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:19.953862    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:20.453842    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:20.953696    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:21.453251    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:21.953658    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:22.453610    2375 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 04:00:22.558065    2375 kubeadm.go:1081] duration metric: took 15.260764709s to wait for elevateKubeSystemPrivileges.
	I0911 04:00:22.558089    2375 kubeadm.go:406] StartCluster complete in 28.772088208s
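The burst of identical `kubectl get sa default` invocations between 04:00:07 and 04:00:22 is a fixed-interval retry loop: minikube keeps probing until the default service account exists, which is what the 15.26s elevateKubeSystemPrivileges metric measures. A standalone sketch of the pattern (hypothetical function; the ~500ms cadence matches the timestamps above):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForDefaultSA re-runs the same probe every 500ms until it succeeds
    // or the deadline passes, mirroring the loop in the log above.
    func waitForDefaultSA(timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		probe := exec.Command("kubectl", "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if probe.Run() == nil {
    			return nil // service account exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return errors.New("timed out waiting for default service account")
    }

    func main() {
    	if err := waitForDefaultSA(time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }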
	I0911 04:00:22.558099    2375 settings.go:142] acquiring lock: {Name:mk1469232b3abbdcc69ed77e286fb2789adb44fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:00:22.558187    2375 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:00:22.558572    2375 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/kubeconfig: {Name:mk8b43c711db1489632c69fe978a061a5dcf6734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:00:22.558759    2375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 04:00:22.558902    2375 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 04:00:22.558930    2375 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-937000"
	I0911 04:00:22.558939    2375 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-937000"
	I0911 04:00:22.558963    2375 host.go:66] Checking if "ingress-addon-legacy-937000" exists ...
	I0911 04:00:22.558973    2375 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-937000"
	I0911 04:00:22.558981    2375 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-937000"
	I0911 04:00:22.559027    2375 kapi.go:59] client config for ingress-addon-legacy-937000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.key", CAFile:"/Users/jenkins/minikube-integration/17223-1124/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c09d70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 04:00:22.559164    2375 config.go:182] Loaded profile config "ingress-addon-legacy-937000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0911 04:00:22.559469    2375 cert_rotation.go:137] Starting client certificate rotation controller
	I0911 04:00:22.560033    2375 kapi.go:59] client config for ingress-addon-legacy-937000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.key", CAFile:"/Users/jenkins/minikube-integration/17223-1124/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c09d70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 04:00:22.564606    2375 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 04:00:22.567639    2375 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 04:00:22.567645    2375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 04:00:22.567653    2375 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/ingress-addon-legacy-937000/id_rsa Username:docker}
	I0911 04:00:22.578559    2375 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-937000"
	I0911 04:00:22.578577    2375 host.go:66] Checking if "ingress-addon-legacy-937000" exists ...
	I0911 04:00:22.579273    2375 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 04:00:22.579279    2375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 04:00:22.579285    2375 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/ingress-addon-legacy-937000/id_rsa Username:docker}
	I0911 04:00:22.592842    2375 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-937000" context rescaled to 1 replicas
	I0911 04:00:22.592873    2375 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:00:22.598648    2375 out.go:177] * Verifying Kubernetes components...
	I0911 04:00:22.605598    2375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 04:00:22.631281    2375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 04:00:22.658487    2375 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 04:00:22.658596    2375 kapi.go:59] client config for ingress-addon-legacy-937000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.key", CAFile:"/Users/jenkins/minikube-integration/17223-1124/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103c09d70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 04:00:22.658736    2375 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-937000" to be "Ready" ...
	I0911 04:00:22.660662    2375 node_ready.go:49] node "ingress-addon-legacy-937000" has status "Ready":"True"
	I0911 04:00:22.660673    2375 node_ready.go:38] duration metric: took 1.928833ms waiting for node "ingress-addon-legacy-937000" to be "Ready" ...
	I0911 04:00:22.660676    2375 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 04:00:22.664281    2375 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace to be "Ready" ...
	I0911 04:00:22.739992    2375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 04:00:22.869472    2375 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
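The sed pipeline at 04:00:22.658487 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host's gateway address from inside the cluster. The resulting Corefile fragment should look roughly like this (reconstructed from the sed expressions; the surrounding plugins are elided):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.105.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }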
	I0911 04:00:22.899906    2375 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0911 04:00:22.909541    2375 addons.go:502] enable addons completed in 350.647292ms: enabled=[storage-provisioner default-storageclass]
	I0911 04:00:24.675275    2375 pod_ready.go:102] pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace has status "Ready":"False"
	I0911 04:00:26.683309    2375 pod_ready.go:102] pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace has status "Ready":"False"
	I0911 04:00:29.183541    2375 pod_ready.go:102] pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace has status "Ready":"False"
	I0911 04:00:31.683767    2375 pod_ready.go:102] pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace has status "Ready":"False"
	I0911 04:00:34.185124    2375 pod_ready.go:102] pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace has status "Ready":"False"
	I0911 04:00:36.683079    2375 pod_ready.go:102] pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace has status "Ready":"False"
	I0911 04:00:38.684428    2375 pod_ready.go:102] pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace has status "Ready":"False"
	I0911 04:00:41.183862    2375 pod_ready.go:102] pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace has status "Ready":"False"
	I0911 04:00:43.681166    2375 pod_ready.go:102] pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace has status "Ready":"False"
	I0911 04:00:45.683540    2375 pod_ready.go:102] pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace has status "Ready":"False"
	I0911 04:00:48.178714    2375 pod_ready.go:102] pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace has status "Ready":"False"
	I0911 04:00:50.182105    2375 pod_ready.go:102] pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace has status "Ready":"False"
	I0911 04:00:52.681578    2375 pod_ready.go:102] pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace has status "Ready":"False"
	I0911 04:00:54.683677    2375 pod_ready.go:102] pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace has status "Ready":"False"
	I0911 04:00:57.182221    2375 pod_ready.go:102] pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace has status "Ready":"False"
	I0911 04:00:57.674947    2375 pod_ready.go:92] pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace has status "Ready":"True"
	I0911 04:00:57.674968    2375 pod_ready.go:81] duration metric: took 35.011565334s waiting for pod "coredns-66bff467f8-mq2jc" in "kube-system" namespace to be "Ready" ...
	I0911 04:00:57.674979    2375 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-rkbz2" in "kube-system" namespace to be "Ready" ...
	I0911 04:00:57.676469    2375 pod_ready.go:97] error getting pod "coredns-66bff467f8-rkbz2" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-rkbz2" not found
	I0911 04:00:57.676481    2375 pod_ready.go:81] duration metric: took 1.496584ms waiting for pod "coredns-66bff467f8-rkbz2" in "kube-system" namespace to be "Ready" ...
	E0911 04:00:57.676488    2375 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-rkbz2" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-rkbz2" not found
	I0911 04:00:57.676494    2375 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-937000" in "kube-system" namespace to be "Ready" ...
	I0911 04:00:57.680284    2375 pod_ready.go:92] pod "etcd-ingress-addon-legacy-937000" in "kube-system" namespace has status "Ready":"True"
	I0911 04:00:57.680291    2375 pod_ready.go:81] duration metric: took 3.791625ms waiting for pod "etcd-ingress-addon-legacy-937000" in "kube-system" namespace to be "Ready" ...
	I0911 04:00:57.680297    2375 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-937000" in "kube-system" namespace to be "Ready" ...
	I0911 04:00:57.683468    2375 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-937000" in "kube-system" namespace has status "Ready":"True"
	I0911 04:00:57.683476    2375 pod_ready.go:81] duration metric: took 3.17425ms waiting for pod "kube-apiserver-ingress-addon-legacy-937000" in "kube-system" namespace to be "Ready" ...
	I0911 04:00:57.683485    2375 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-937000" in "kube-system" namespace to be "Ready" ...
	I0911 04:00:57.696368    2375 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-937000" in "kube-system" namespace has status "Ready":"True"
	I0911 04:00:57.696378    2375 pod_ready.go:81] duration metric: took 12.885291ms waiting for pod "kube-controller-manager-ingress-addon-legacy-937000" in "kube-system" namespace to be "Ready" ...
	I0911 04:00:57.696384    2375 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hzz2h" in "kube-system" namespace to be "Ready" ...
	I0911 04:00:57.872195    2375 request.go:629] Waited for 173.420959ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-937000
	I0911 04:00:57.879122    2375 pod_ready.go:92] pod "kube-proxy-hzz2h" in "kube-system" namespace has status "Ready":"True"
	I0911 04:00:57.879137    2375 pod_ready.go:81] duration metric: took 182.752375ms waiting for pod "kube-proxy-hzz2h" in "kube-system" namespace to be "Ready" ...
	I0911 04:00:57.879146    2375 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-937000" in "kube-system" namespace to be "Ready" ...
	I0911 04:00:58.072187    2375 request.go:629] Waited for 192.9605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-937000
	I0911 04:00:58.272153    2375 request.go:629] Waited for 192.476125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-937000
	I0911 04:00:58.278117    2375 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-937000" in "kube-system" namespace has status "Ready":"True"
	I0911 04:00:58.278140    2375 pod_ready.go:81] duration metric: took 398.996042ms waiting for pod "kube-scheduler-ingress-addon-legacy-937000" in "kube-system" namespace to be "Ready" ...
	I0911 04:00:58.278153    2375 pod_ready.go:38] duration metric: took 35.618371542s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
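The request.go:629 "client-side throttling" waits above come from client-go's default rate limiter: the rest.Config dumps earlier show QPS:0 and Burst:0, which client-go replaces with a 5 QPS / burst-10 token bucket. A minimal demonstration of that limiter (uses k8s.io/client-go/util/flowcontrol):

    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/util/flowcontrol"
    )

    func main() {
    	// client-go's defaults when rest.Config leaves QPS/Burst at zero.
    	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
    	start := time.Now()
    	for i := 0; i < 15; i++ {
    		limiter.Accept() // blocks once the burst of 10 is spent
    	}
    	// The 5 over-burst requests pace out at 5 QPS, producing per-request
    	// waits of the same order as the ~170-190ms ones logged above.
    	fmt.Println("15 requests took", time.Since(start))
    }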
	I0911 04:00:58.278191    2375 api_server.go:52] waiting for apiserver process to appear ...
	I0911 04:00:58.278474    2375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 04:00:58.292813    2375 api_server.go:72] duration metric: took 35.700819958s to wait for apiserver process to appear ...
	I0911 04:00:58.292836    2375 api_server.go:88] waiting for apiserver healthz status ...
	I0911 04:00:58.292854    2375 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0911 04:00:58.301029    2375 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0911 04:00:58.302481    2375 api_server.go:141] control plane version: v1.18.20
	I0911 04:00:58.302497    2375 api_server.go:131] duration metric: took 9.654792ms to wait for apiserver health ...
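The healthz wait above is a plain HTTPS GET against the apiserver that succeeds once it returns 200/ok. A self-contained sketch (illustration only: it skips TLS verification instead of loading the profile's CA and client certificates the way minikube does):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		// Sketch shortcut; minikube verifies against .minikube/ca.crt.
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://192.168.105.6:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }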
	I0911 04:00:58.302504    2375 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 04:00:58.472157    2375 request.go:629] Waited for 169.58125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0911 04:00:58.486915    2375 system_pods.go:59] 7 kube-system pods found
	I0911 04:00:58.486948    2375 system_pods.go:61] "coredns-66bff467f8-mq2jc" [d522f8ea-e7bb-40cc-a788-36fad8c72593] Running
	I0911 04:00:58.486958    2375 system_pods.go:61] "etcd-ingress-addon-legacy-937000" [52e3c5c1-e357-4815-9dcd-dd64a2ad59c3] Running
	I0911 04:00:58.486968    2375 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-937000" [f833fe46-33c9-49f3-80eb-3e6856fb79fe] Running
	I0911 04:00:58.486979    2375 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-937000" [734b632c-c2dd-40ac-8e9c-c8540949d63a] Running
	I0911 04:00:58.486992    2375 system_pods.go:61] "kube-proxy-hzz2h" [3fb5dc40-f1d1-414f-8a94-157be8f27925] Running
	I0911 04:00:58.487006    2375 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-937000" [4af5c42d-e083-4c13-9de3-8c69a02ccb10] Running
	I0911 04:00:58.487021    2375 system_pods.go:61] "storage-provisioner" [2eabb3b9-81d8-4129-ab51-0278b156c620] Running
	I0911 04:00:58.487029    2375 system_pods.go:74] duration metric: took 184.523083ms to wait for pod list to return data ...
	I0911 04:00:58.487042    2375 default_sa.go:34] waiting for default service account to be created ...
	I0911 04:00:58.672123    2375 request.go:629] Waited for 184.979916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0911 04:00:58.677833    2375 default_sa.go:45] found service account: "default"
	I0911 04:00:58.677866    2375 default_sa.go:55] duration metric: took 190.818875ms for default service account to be created ...
	I0911 04:00:58.677882    2375 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 04:00:58.870167    2375 request.go:629] Waited for 192.139625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0911 04:00:58.894910    2375 system_pods.go:86] 7 kube-system pods found
	I0911 04:00:58.894944    2375 system_pods.go:89] "coredns-66bff467f8-mq2jc" [d522f8ea-e7bb-40cc-a788-36fad8c72593] Running
	I0911 04:00:58.894953    2375 system_pods.go:89] "etcd-ingress-addon-legacy-937000" [52e3c5c1-e357-4815-9dcd-dd64a2ad59c3] Running
	I0911 04:00:58.894960    2375 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-937000" [f833fe46-33c9-49f3-80eb-3e6856fb79fe] Running
	I0911 04:00:58.894991    2375 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-937000" [734b632c-c2dd-40ac-8e9c-c8540949d63a] Running
	I0911 04:00:58.895002    2375 system_pods.go:89] "kube-proxy-hzz2h" [3fb5dc40-f1d1-414f-8a94-157be8f27925] Running
	I0911 04:00:58.895010    2375 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-937000" [4af5c42d-e083-4c13-9de3-8c69a02ccb10] Running
	I0911 04:00:58.895016    2375 system_pods.go:89] "storage-provisioner" [2eabb3b9-81d8-4129-ab51-0278b156c620] Running
	I0911 04:00:58.895025    2375 system_pods.go:126] duration metric: took 217.137792ms to wait for k8s-apps to be running ...
	I0911 04:00:58.895038    2375 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 04:00:58.895209    2375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 04:00:58.908662    2375 system_svc.go:56] duration metric: took 13.614875ms WaitForService to wait for kubelet.
	I0911 04:00:58.908679    2375 kubeadm.go:581] duration metric: took 36.316709542s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 04:00:58.908700    2375 node_conditions.go:102] verifying NodePressure condition ...
	I0911 04:00:59.072212    2375 request.go:629] Waited for 163.388667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0911 04:00:59.078419    2375 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0911 04:00:59.078455    2375 node_conditions.go:123] node cpu capacity is 2
	I0911 04:00:59.078485    2375 node_conditions.go:105] duration metric: took 169.780875ms to run NodePressure ...
	I0911 04:00:59.078505    2375 start.go:228] waiting for startup goroutines ...
	I0911 04:00:59.078516    2375 start.go:233] waiting for cluster config update ...
	I0911 04:00:59.078543    2375 start.go:242] writing updated cluster config ...
	I0911 04:00:59.079505    2375 ssh_runner.go:195] Run: rm -f paused
	I0911 04:00:59.136238    2375 start.go:600] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0911 04:00:59.139542    2375 out.go:177] 
	W0911 04:00:59.142514    2375 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0911 04:00:59.146408    2375 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0911 04:00:59.152474    2375 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-937000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-09-11 10:59:42 UTC, ends at Mon 2023-09-11 11:02:08 UTC. --
	Sep 11 11:01:43 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:01:43.997665012Z" level=info msg="shim disconnected" id=d1d770ace6cac0b9a92c8dfaa99317eaac70df329cb5cc7c838f8b8dbc43cb2d namespace=moby
	Sep 11 11:01:43 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:01:43.997688387Z" level=warning msg="cleaning up after shim disconnected" id=d1d770ace6cac0b9a92c8dfaa99317eaac70df329cb5cc7c838f8b8dbc43cb2d namespace=moby
	Sep 11 11:01:43 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:01:43.997692679Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 11 11:01:57 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:01:57.009914526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 11 11:01:57 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:01:57.010010228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 11:01:57 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:01:57.010038143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 11 11:01:57 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:01:57.010057475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 11 11:01:57 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:01:57.059865862Z" level=info msg="shim disconnected" id=907a1f1a2125bdbc9ff1da0617ceb7642b56597bbe70f5c3c4285049bf95726c namespace=moby
	Sep 11 11:01:57 ingress-addon-legacy-937000 dockerd[1063]: time="2023-09-11T11:01:57.059979396Z" level=info msg="ignoring event" container=907a1f1a2125bdbc9ff1da0617ceb7642b56597bbe70f5c3c4285049bf95726c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 11 11:01:57 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:01:57.060286459Z" level=warning msg="cleaning up after shim disconnected" id=907a1f1a2125bdbc9ff1da0617ceb7642b56597bbe70f5c3c4285049bf95726c namespace=moby
	Sep 11 11:01:57 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:01:57.060297667Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 11 11:01:57 ingress-addon-legacy-937000 dockerd[1063]: time="2023-09-11T11:01:57.971914125Z" level=info msg="ignoring event" container=5e8e47c23a8107ea59ae3f63a33b8187f07f09e9bb426682a0e931e43d8ce2c0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 11 11:01:57 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:01:57.972143859Z" level=info msg="shim disconnected" id=5e8e47c23a8107ea59ae3f63a33b8187f07f09e9bb426682a0e931e43d8ce2c0 namespace=moby
	Sep 11 11:01:57 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:01:57.972200272Z" level=warning msg="cleaning up after shim disconnected" id=5e8e47c23a8107ea59ae3f63a33b8187f07f09e9bb426682a0e931e43d8ce2c0 namespace=moby
	Sep 11 11:01:57 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:01:57.972210063Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 11 11:02:03 ingress-addon-legacy-937000 dockerd[1063]: time="2023-09-11T11:02:03.451384098Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=22abf78e178e694e4b79274827cd3b7ce7cb4f00c6a860a0796e00e490d5cc23
	Sep 11 11:02:03 ingress-addon-legacy-937000 dockerd[1063]: time="2023-09-11T11:02:03.460954706Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=22abf78e178e694e4b79274827cd3b7ce7cb4f00c6a860a0796e00e490d5cc23
	Sep 11 11:02:03 ingress-addon-legacy-937000 dockerd[1063]: time="2023-09-11T11:02:03.541962872Z" level=info msg="ignoring event" container=22abf78e178e694e4b79274827cd3b7ce7cb4f00c6a860a0796e00e490d5cc23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 11 11:02:03 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:02:03.542109406Z" level=info msg="shim disconnected" id=22abf78e178e694e4b79274827cd3b7ce7cb4f00c6a860a0796e00e490d5cc23 namespace=moby
	Sep 11 11:02:03 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:02:03.542183985Z" level=warning msg="cleaning up after shim disconnected" id=22abf78e178e694e4b79274827cd3b7ce7cb4f00c6a860a0796e00e490d5cc23 namespace=moby
	Sep 11 11:02:03 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:02:03.542193235Z" level=info msg="cleaning up dead shim" namespace=moby
	Sep 11 11:02:03 ingress-addon-legacy-937000 dockerd[1063]: time="2023-09-11T11:02:03.578426236Z" level=info msg="ignoring event" container=5b47d400c8bf735fd7b8fe8af22a004cdd8453106cf76d67eb57419cfa82d0d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 11 11:02:03 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:02:03.579090533Z" level=info msg="shim disconnected" id=5b47d400c8bf735fd7b8fe8af22a004cdd8453106cf76d67eb57419cfa82d0d2 namespace=moby
	Sep 11 11:02:03 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:02:03.579124907Z" level=warning msg="cleaning up after shim disconnected" id=5b47d400c8bf735fd7b8fe8af22a004cdd8453106cf76d67eb57419cfa82d0d2 namespace=moby
	Sep 11 11:02:03 ingress-addon-legacy-937000 dockerd[1070]: time="2023-09-11T11:02:03.579130115Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	907a1f1a2125b       a39a074194753                                                                                                      12 seconds ago       Exited              hello-world-app           2                   4b457e152f0a8
	0491e06cc2361       nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                                      33 seconds ago       Running             nginx                     0                   2a90ea2edb3ea
	22abf78e178e6       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   57 seconds ago       Exited              controller                0                   5b47d400c8bf7
	42e0019e617b2       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              patch                     0                   30319f557238c
	2336ac3f1fcb1       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   3e31a1212c2e2
	624c18ecc15aa       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   76de6b026ad6e
	3141d43021a96       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   374a370909005
	b333e3fb6e4a8       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   f743f1b27cd3a
	7a22dc145cfc5       095f37015706d                                                                                                      2 minutes ago        Running             kube-scheduler            0                   ea6ad6b9cba17
	c6835b5ab8a09       68a4fac29a865                                                                                                      2 minutes ago        Running             kube-controller-manager   0                   f979a20f81929
	5948fdfe5ad89       2694cf044d665                                                                                                      2 minutes ago        Running             kube-apiserver            0                   025ded886312c
	d2d0a18e21c5a       ab707b0a0ea33                                                                                                      2 minutes ago        Running             etcd                      0                   3bf47125ca5fc
	
	* 
	* ==> coredns [3141d43021a9] <==
	* [INFO] 172.17.0.1:31584 - 32697 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030667s
	[INFO] 172.17.0.1:31584 - 30360 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029542s
	[INFO] 172.17.0.1:31584 - 4301 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029459s
	[INFO] 172.17.0.1:31584 - 10699 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000039667s
	[INFO] 172.17.0.1:10020 - 12528 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000019459s
	[INFO] 172.17.0.1:10020 - 11540 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000010875s
	[INFO] 172.17.0.1:10020 - 7766 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009583s
	[INFO] 172.17.0.1:10020 - 63447 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011917s
	[INFO] 172.17.0.1:10020 - 40210 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011917s
	[INFO] 172.17.0.1:10020 - 39468 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008584s
	[INFO] 172.17.0.1:10020 - 34729 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000013709s
	[INFO] 172.17.0.1:39602 - 4525 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000011709s
	[INFO] 172.17.0.1:39602 - 59242 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000009167s
	[INFO] 172.17.0.1:39602 - 45976 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000007583s
	[INFO] 172.17.0.1:39602 - 36497 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000007333s
	[INFO] 172.17.0.1:39602 - 56308 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008125s
	[INFO] 172.17.0.1:39602 - 20587 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000007208s
	[INFO] 172.17.0.1:39602 - 59979 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000007708s
	[INFO] 172.17.0.1:11403 - 61868 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000037709s
	[INFO] 172.17.0.1:11403 - 57129 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000059793s
	[INFO] 172.17.0.1:11403 - 12706 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009458s
	[INFO] 172.17.0.1:11403 - 65264 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000008s
	[INFO] 172.17.0.1:11403 - 8341 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008375s
	[INFO] 172.17.0.1:11403 - 19964 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008125s
	[INFO] 172.17.0.1:11403 - 22617 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00001s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-937000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-937000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=ingress-addon-legacy-937000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T04_00_07_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:00:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-937000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:02:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:01:44 +0000   Mon, 11 Sep 2023 11:00:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:01:44 +0000   Mon, 11 Sep 2023 11:00:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:01:44 +0000   Mon, 11 Sep 2023 11:00:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:01:44 +0000   Mon, 11 Sep 2023 11:00:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-937000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 4260432bd34c4390b652c1c6c98ff142
	  System UUID:                4260432bd34c4390b652c1c6c98ff142
	  Boot ID:                    d29c1883-dccd-4c2f-8c17-4334e70ff2b1
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.5
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-qq7k7                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 coredns-66bff467f8-mq2jc                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     106s
	  kube-system                 etcd-ingress-addon-legacy-937000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-apiserver-ingress-addon-legacy-937000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-937000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-hzz2h                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-ingress-addon-legacy-937000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 115s  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  115s  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  115s  kubelet     Node ingress-addon-legacy-937000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s  kubelet     Node ingress-addon-legacy-937000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s  kubelet     Node ingress-addon-legacy-937000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                115s  kubelet     Node ingress-addon-legacy-937000 status is now: NodeReady
	  Normal  Starting                 105s  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep11 10:59] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.667878] EINJ: EINJ table not found.
	[  +0.512115] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044343] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000800] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.060712] systemd-fstab-generator[474]: Ignoring "noauto" for root device
	[  +0.078292] systemd-fstab-generator[486]: Ignoring "noauto" for root device
	[  +0.428092] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[  +0.175451] systemd-fstab-generator[740]: Ignoring "noauto" for root device
	[  +0.065927] systemd-fstab-generator[751]: Ignoring "noauto" for root device
	[  +0.086821] systemd-fstab-generator[764]: Ignoring "noauto" for root device
	[  +1.144466] kauditd_printk_skb: 17 callbacks suppressed
	[  +3.160442] systemd-fstab-generator[1056]: Ignoring "noauto" for root device
	[  +4.155016] systemd-fstab-generator[1526]: Ignoring "noauto" for root device
	[Sep11 11:00] kauditd_printk_skb: 68 callbacks suppressed
	[  +0.077790] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +5.145093] systemd-fstab-generator[2598]: Ignoring "noauto" for root device
	[ +16.002152] kauditd_printk_skb: 7 callbacks suppressed
	[ +34.665529] kauditd_printk_skb: 9 callbacks suppressed
	[Sep11 11:01] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	
	* 
	* ==> etcd [d2d0a18e21c5] <==
	* raft2023/09/11 11:00:02 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/09/11 11:00:02 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/09/11 11:00:02 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/09/11 11:00:02 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-11 11:00:02.867709 W | auth: simple token is not cryptographically signed
	2023-09-11 11:00:02.868416 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-11 11:00:02.869032 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/09/11 11:00:02 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-09-11 11:00:02.869483 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	2023-09-11 11:00:02.870284 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-11 11:00:02.870447 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-11 11:00:02.870492 I | embed: listening for peers on 192.168.105.6:2380
	raft2023/09/11 11:00:03 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/09/11 11:00:03 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/09/11 11:00:03 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/09/11 11:00:03 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/09/11 11:00:03 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-09-11 11:00:03.467036 I | etcdserver: published {Name:ingress-addon-legacy-937000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-09-11 11:00:03.467142 I | embed: ready to serve client requests
	2023-09-11 11:00:03.468044 I | embed: serving client requests on 192.168.105.6:2379
	2023-09-11 11:00:03.468132 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-11 11:00:03.468430 I | embed: ready to serve client requests
	2023-09-11 11:00:03.468960 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-11 11:00:03.470879 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-11 11:00:03.470920 I | etcdserver/api: enabled capabilities for version 3.4
	
	* 
	* ==> kernel <==
	*  11:02:08 up 2 min,  0 users,  load average: 0.40, 0.13, 0.04
	Linux ingress-addon-legacy-937000 5.10.57 #1 SMP PREEMPT Thu Aug 24 12:01:08 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5948fdfe5ad8] <==
	* I0911 11:00:04.962165       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	E0911 11:00:04.978786       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.6, ResourceVersion: 0, AdditionalErrorMsg: 
	I0911 11:00:05.047673       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0911 11:00:05.047691       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0911 11:00:05.048098       1 cache.go:39] Caches are synced for autoregister controller
	I0911 11:00:05.048249       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0911 11:00:05.049469       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0911 11:00:05.947245       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0911 11:00:05.947309       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0911 11:00:05.960221       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0911 11:00:05.967828       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0911 11:00:05.967859       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0911 11:00:06.125075       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0911 11:00:06.139077       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0911 11:00:06.219736       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0911 11:00:06.220194       1 controller.go:609] quota admission added evaluator for: endpoints
	I0911 11:00:06.221674       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0911 11:00:07.242365       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0911 11:00:07.426108       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0911 11:00:07.662002       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0911 11:00:13.877873       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0911 11:00:22.705505       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0911 11:00:22.707146       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0911 11:00:59.490404       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0911 11:01:29.749653       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [c6835b5ab8a0] <==
	* I0911 11:00:22.722657       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"d1f0a23b-dac7-4b4f-b825-e3835e4d178d", APIVersion:"apps/v1", ResourceVersion:"316", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-rkbz2
	E0911 11:00:22.726212       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"dd96b9dc-bdf9-4081-af2a-ef7cae2767dc", ResourceVersion:"208", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63830026807, loc:(*time.Location)(0x6307ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001879580), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0x40018795a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40018795c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(n
il), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40018b8100), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSou
rce)(0x40018795e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.Pr
ojectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001879600), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolum
eSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001879640)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList
(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40015918b0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40018b04a8), Acti
veDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400033dab0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPoli
cy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000f4d0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40018b04f8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0911 11:00:22.727127       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"d1f0a23b-dac7-4b4f-b825-e3835e4d178d", APIVersion:"apps/v1", ResourceVersion:"316", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-mq2jc
	I0911 11:00:22.773343       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"868b1d33-bb8a-4146-a15f-b159ad8c0852", APIVersion:"apps/v1", ResourceVersion:"339", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I0911 11:00:22.787430       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"d1f0a23b-dac7-4b4f-b825-e3835e4d178d", APIVersion:"apps/v1", ResourceVersion:"341", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-rkbz2
	I0911 11:00:22.855379       1 shared_informer.go:230] Caches are synced for PV protection 
	I0911 11:00:22.860994       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0911 11:00:22.867803       1 shared_informer.go:230] Caches are synced for expand 
	I0911 11:00:22.968237       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0911 11:00:22.989453       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I0911 11:00:23.090662       1 shared_informer.go:230] Caches are synced for attach detach 
	I0911 11:00:23.200923       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0911 11:00:23.240871       1 shared_informer.go:230] Caches are synced for resource quota 
	I0911 11:00:23.251026       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0911 11:00:23.251039       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0911 11:00:23.292508       1 shared_informer.go:230] Caches are synced for resource quota 
	I0911 11:00:59.488005       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"6ab11faa-b94a-4939-8565-6595198512a3", APIVersion:"apps/v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0911 11:00:59.497260       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"717a01c0-1b1d-451f-9ecb-af15c9cec33c", APIVersion:"apps/v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-6zsz8
	I0911 11:00:59.497398       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"452ce092-96a2-43c4-bbb4-cf8501cd3ef5", APIVersion:"batch/v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-4wm4q
	I0911 11:00:59.516511       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"f8794fd2-bb5b-4dfa-b66f-4e1ed98de85f", APIVersion:"batch/v1", ResourceVersion:"472", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-rf2tb
	I0911 11:01:03.431545       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"f8794fd2-bb5b-4dfa-b66f-4e1ed98de85f", APIVersion:"batch/v1", ResourceVersion:"481", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0911 11:01:03.453663       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"452ce092-96a2-43c4-bbb4-cf8501cd3ef5", APIVersion:"batch/v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0911 11:01:41.029049       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"b35a66ee-065a-4af7-8712-822d0d6de8f5", APIVersion:"apps/v1", ResourceVersion:"598", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0911 11:01:41.036022       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"69cdfe3f-5261-458f-a932-427500f54ddd", APIVersion:"apps/v1", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-qq7k7
	E0911 11:02:06.183207       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-67nmc" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [b333e3fb6e4a] <==
	* W0911 11:00:23.274859       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0911 11:00:23.278981       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0911 11:00:23.278998       1 server_others.go:186] Using iptables Proxier.
	I0911 11:00:23.279130       1 server.go:583] Version: v1.18.20
	I0911 11:00:23.280405       1 config.go:133] Starting endpoints config controller
	I0911 11:00:23.280419       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0911 11:00:23.280574       1 config.go:315] Starting service config controller
	I0911 11:00:23.280577       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0911 11:00:23.384204       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0911 11:00:23.384228       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [7a22dc145cfc] <==
	* W0911 11:00:04.983552       1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 11:00:04.983580       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0911 11:00:04.983597       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0911 11:00:04.994316       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0911 11:00:04.994329       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0911 11:00:04.995361       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0911 11:00:04.995458       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0911 11:00:04.995480       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 11:00:04.995579       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0911 11:00:04.996287       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0911 11:00:04.997061       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0911 11:00:04.997245       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 11:00:04.997084       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0911 11:00:04.997106       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0911 11:00:04.997126       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0911 11:00:04.997146       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 11:00:04.997166       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0911 11:00:04.997184       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0911 11:00:04.997286       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0911 11:00:04.997322       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 11:00:04.997371       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0911 11:00:05.818254       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 11:00:05.886582       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0911 11:00:06.083129       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0911 11:00:06.197171       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 10:59:42 UTC, ends at Mon 2023-09-11 11:02:08 UTC. --
	Sep 11 11:01:48 ingress-addon-legacy-937000 kubelet[2604]: I0911 11:01:48.936849    2604 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7eb809bf37630ab16c1b29782f2cd431b47bb873645f9b26ef987c3c0a7a8c7c
	Sep 11 11:01:48 ingress-addon-legacy-937000 kubelet[2604]: E0911 11:01:48.937697    2604 pod_workers.go:191] Error syncing pod 2876e9d5-2115-407d-b478-6048467bcd14 ("kube-ingress-dns-minikube_kube-system(2876e9d5-2115-407d-b478-6048467bcd14)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(2876e9d5-2115-407d-b478-6048467bcd14)"
	Sep 11 11:01:56 ingress-addon-legacy-937000 kubelet[2604]: I0911 11:01:56.510304    2604 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-wsj9w" (UniqueName: "kubernetes.io/secret/2876e9d5-2115-407d-b478-6048467bcd14-minikube-ingress-dns-token-wsj9w") pod "2876e9d5-2115-407d-b478-6048467bcd14" (UID: "2876e9d5-2115-407d-b478-6048467bcd14")
	Sep 11 11:01:56 ingress-addon-legacy-937000 kubelet[2604]: I0911 11:01:56.512626    2604 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2876e9d5-2115-407d-b478-6048467bcd14-minikube-ingress-dns-token-wsj9w" (OuterVolumeSpecName: "minikube-ingress-dns-token-wsj9w") pod "2876e9d5-2115-407d-b478-6048467bcd14" (UID: "2876e9d5-2115-407d-b478-6048467bcd14"). InnerVolumeSpecName "minikube-ingress-dns-token-wsj9w". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 11 11:01:56 ingress-addon-legacy-937000 kubelet[2604]: I0911 11:01:56.615201    2604 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-wsj9w" (UniqueName: "kubernetes.io/secret/2876e9d5-2115-407d-b478-6048467bcd14-minikube-ingress-dns-token-wsj9w") on node "ingress-addon-legacy-937000" DevicePath ""
	Sep 11 11:01:56 ingress-addon-legacy-937000 kubelet[2604]: I0911 11:01:56.934605    2604 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d1d770ace6cac0b9a92c8dfaa99317eaac70df329cb5cc7c838f8b8dbc43cb2d
	Sep 11 11:01:57 ingress-addon-legacy-937000 kubelet[2604]: W0911 11:01:57.072534    2604 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod4e1a590f-9aff-4f8f-bf45-b7b3d13b162f/907a1f1a2125bdbc9ff1da0617ceb7642b56597bbe70f5c3c4285049bf95726c": none of the resources are being tracked.
	Sep 11 11:01:57 ingress-addon-legacy-937000 kubelet[2604]: W0911 11:01:57.113279    2604 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-qq7k7 through plugin: invalid network status for
	Sep 11 11:01:57 ingress-addon-legacy-937000 kubelet[2604]: I0911 11:01:57.115241    2604 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: d1d770ace6cac0b9a92c8dfaa99317eaac70df329cb5cc7c838f8b8dbc43cb2d
	Sep 11 11:01:57 ingress-addon-legacy-937000 kubelet[2604]: I0911 11:01:57.115370    2604 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 907a1f1a2125bdbc9ff1da0617ceb7642b56597bbe70f5c3c4285049bf95726c
	Sep 11 11:01:57 ingress-addon-legacy-937000 kubelet[2604]: E0911 11:01:57.115474    2604 pod_workers.go:191] Error syncing pod 4e1a590f-9aff-4f8f-bf45-b7b3d13b162f ("hello-world-app-5f5d8b66bb-qq7k7_default(4e1a590f-9aff-4f8f-bf45-b7b3d13b162f)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-qq7k7_default(4e1a590f-9aff-4f8f-bf45-b7b3d13b162f)"
	Sep 11 11:01:58 ingress-addon-legacy-937000 kubelet[2604]: W0911 11:01:58.125383    2604 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-qq7k7 through plugin: invalid network status for
	Sep 11 11:01:58 ingress-addon-legacy-937000 kubelet[2604]: I0911 11:01:58.139579    2604 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 7eb809bf37630ab16c1b29782f2cd431b47bb873645f9b26ef987c3c0a7a8c7c
	Sep 11 11:02:01 ingress-addon-legacy-937000 kubelet[2604]: E0911 11:02:01.444629    2604 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-6zsz8.1783d337cd5608f8", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-6zsz8", UID:"5784f0e0-2e8b-4d54-a35a-25610e2c7733", APIVersion:"v1", ResourceVersion:"464", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-937000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc137daca5a68aef8, ext:114039983850, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc137daca5a68aef8, ext:114039983850, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-6zsz8.1783d337cd5608f8" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 11 11:02:01 ingress-addon-legacy-937000 kubelet[2604]: E0911 11:02:01.461177    2604 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-6zsz8.1783d337cd5608f8", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-6zsz8", UID:"5784f0e0-2e8b-4d54-a35a-25610e2c7733", APIVersion:"v1", ResourceVersion:"464", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-937000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc137daca5a68aef8, ext:114039983850, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc137daca5af4db72, ext:114049170275, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-6zsz8.1783d337cd5608f8" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 11 11:02:04 ingress-addon-legacy-937000 kubelet[2604]: W0911 11:02:04.242814    2604 pod_container_deletor.go:77] Container "5b47d400c8bf735fd7b8fe8af22a004cdd8453106cf76d67eb57419cfa82d0d2" not found in pod's containers
	Sep 11 11:02:05 ingress-addon-legacy-937000 kubelet[2604]: I0911 11:02:05.599896    2604 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/5784f0e0-2e8b-4d54-a35a-25610e2c7733-webhook-cert") pod "5784f0e0-2e8b-4d54-a35a-25610e2c7733" (UID: "5784f0e0-2e8b-4d54-a35a-25610e2c7733")
	Sep 11 11:02:05 ingress-addon-legacy-937000 kubelet[2604]: I0911 11:02:05.600716    2604 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-j7lkw" (UniqueName: "kubernetes.io/secret/5784f0e0-2e8b-4d54-a35a-25610e2c7733-ingress-nginx-token-j7lkw") pod "5784f0e0-2e8b-4d54-a35a-25610e2c7733" (UID: "5784f0e0-2e8b-4d54-a35a-25610e2c7733")
	Sep 11 11:02:05 ingress-addon-legacy-937000 kubelet[2604]: I0911 11:02:05.609426    2604 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5784f0e0-2e8b-4d54-a35a-25610e2c7733-ingress-nginx-token-j7lkw" (OuterVolumeSpecName: "ingress-nginx-token-j7lkw") pod "5784f0e0-2e8b-4d54-a35a-25610e2c7733" (UID: "5784f0e0-2e8b-4d54-a35a-25610e2c7733"). InnerVolumeSpecName "ingress-nginx-token-j7lkw". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 11 11:02:05 ingress-addon-legacy-937000 kubelet[2604]: I0911 11:02:05.610147    2604 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5784f0e0-2e8b-4d54-a35a-25610e2c7733-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "5784f0e0-2e8b-4d54-a35a-25610e2c7733" (UID: "5784f0e0-2e8b-4d54-a35a-25610e2c7733"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 11 11:02:05 ingress-addon-legacy-937000 kubelet[2604]: I0911 11:02:05.701304    2604 reconciler.go:319] Volume detached for volume "ingress-nginx-token-j7lkw" (UniqueName: "kubernetes.io/secret/5784f0e0-2e8b-4d54-a35a-25610e2c7733-ingress-nginx-token-j7lkw") on node "ingress-addon-legacy-937000" DevicePath ""
	Sep 11 11:02:05 ingress-addon-legacy-937000 kubelet[2604]: I0911 11:02:05.701399    2604 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/5784f0e0-2e8b-4d54-a35a-25610e2c7733-webhook-cert") on node "ingress-addon-legacy-937000" DevicePath ""
	Sep 11 11:02:05 ingress-addon-legacy-937000 kubelet[2604]: W0911 11:02:05.951269    2604 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/5784f0e0-2e8b-4d54-a35a-25610e2c7733/volumes" does not exist
	Sep 11 11:02:07 ingress-addon-legacy-937000 kubelet[2604]: I0911 11:02:07.931698    2604 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 907a1f1a2125bdbc9ff1da0617ceb7642b56597bbe70f5c3c4285049bf95726c
	Sep 11 11:02:07 ingress-addon-legacy-937000 kubelet[2604]: E0911 11:02:07.933863    2604 pod_workers.go:191] Error syncing pod 4e1a590f-9aff-4f8f-bf45-b7b3d13b162f ("hello-world-app-5f5d8b66bb-qq7k7_default(4e1a590f-9aff-4f8f-bf45-b7b3d13b162f)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-qq7k7_default(4e1a590f-9aff-4f8f-bf45-b7b3d13b162f)"
	
	* 
	* ==> storage-provisioner [624c18ecc15a] <==
	* I0911 11:00:24.833453       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 11:00:24.837316       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 11:00:24.837490       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 11:00:24.840392       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 11:00:24.840412       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cb62eb66-70e5-4202-8435-17e8ae526b26", APIVersion:"v1", ResourceVersion:"382", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-937000_b7d5cd1a-a2e0-4669-916b-42d652b85425 became leader
	I0911 11:00:24.842018       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-937000_b7d5cd1a-a2e0-4669-916b-42d652b85425!
	I0911 11:00:24.942748       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-937000_b7d5cd1a-a2e0-4669-916b-42d652b85425!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-937000 -n ingress-addon-legacy-937000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-937000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (57.05s)
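
The NXDOMAIN/NOERROR pairs in the coredns log above are ordinary resolver
search-path expansion, not an additional failure: the querying pod's
/etc/resolv.conf carries the search domains visible in the queries
(ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local) plus
the Kubernetes default options ndots:5, so a name with fewer than five
dots is first tried with each suffix appended (the NXDOMAIN answers)
before the absolute name resolves NOERROR. A minimal sketch of observing
the same expansion from inside the cluster; the probe pod name and
busybox tag are illustrative, not part of this run:

  # run a throwaway pod and resolve the service name through cluster DNS;
  # the final NOERROR answer arrives after the search suffixes fail
  kubectl --context ingress-addon-legacy-937000 run dns-probe \
    --image=busybox:1.36 --restart=Never --rm -it -- \
    nslookup hello-world-app.default.svc.cluster.local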

TestMountStart/serial/StartWithMountFirst (9.98s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-362000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-362000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.909569333s)

-- stdout --
	* [mount-start-1-362000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-362000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-362000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-362000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-362000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-362000 -n mount-start-1-362000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-362000 -n mount-start-1-362000: exit status 7 (69.409417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-362000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.98s)
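
Every qemu2 start in this run fails identically, including the
TestMultiNode run below: minikube cannot connect to /var/run/socket_vmnet,
so both host-creation attempts abort and the test exits with
GUEST_PROVISION. A minimal triage sketch; only the delete command comes
from the error text above, and the Homebrew service name is an assumption
about how socket_vmnet was installed on the agent:

  # is anything listening on the socket minikube dials?
  ls -l /var/run/socket_vmnet

  # if socket_vmnet was installed via Homebrew, restart its daemon
  # (assumed install method, not confirmed by this log)
  sudo brew services restart socket_vmnet

  # then run the cleanup the error suggests and retry the start
  out/minikube-darwin-arm64 delete -p mount-start-1-362000
  out/minikube-darwin-arm64 start -p mount-start-1-362000 --driver=qemu2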

TestMultiNode/serial/FreshStart2Nodes (10.11s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-479000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-479000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.033803083s)

-- stdout --
	* [multinode-479000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-479000 in cluster multinode-479000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-479000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:04:55.085761    2710 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:04:55.085869    2710 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:04:55.085872    2710 out.go:309] Setting ErrFile to fd 2...
	I0911 04:04:55.085874    2710 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:04:55.085991    2710 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:04:55.086963    2710 out.go:303] Setting JSON to false
	I0911 04:04:55.102220    2710 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2069,"bootTime":1694428226,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:04:55.102271    2710 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:04:55.110666    2710 out.go:177] * [multinode-479000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:04:55.112183    2710 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:04:55.112253    2710 notify.go:220] Checking for updates...
	I0911 04:04:55.116560    2710 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:04:55.120584    2710 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:04:55.123550    2710 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:04:55.126585    2710 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:04:55.129610    2710 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:04:55.131077    2710 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:04:55.135510    2710 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:04:55.142447    2710 start.go:298] selected driver: qemu2
	I0911 04:04:55.142453    2710 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:04:55.142461    2710 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:04:55.144411    2710 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:04:55.147552    2710 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:04:55.150593    2710 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:04:55.150611    2710 cni.go:84] Creating CNI manager for ""
	I0911 04:04:55.150615    2710 cni.go:136] 0 nodes found, recommending kindnet
	I0911 04:04:55.150619    2710 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0911 04:04:55.150627    2710 start_flags.go:321] config:
	{Name:multinode-479000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-479000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:04:55.155768    2710 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:04:55.162554    2710 out.go:177] * Starting control plane node multinode-479000 in cluster multinode-479000
	I0911 04:04:55.166609    2710 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:04:55.166628    2710 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:04:55.166643    2710 cache.go:57] Caching tarball of preloaded images
	I0911 04:04:55.166697    2710 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:04:55.166703    2710 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:04:55.166899    2710 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/multinode-479000/config.json ...
	I0911 04:04:55.166912    2710 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/multinode-479000/config.json: {Name:mk9aa0832c690d528246727718259ae4d307b238 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:04:55.167113    2710 start.go:365] acquiring machines lock for multinode-479000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:04:55.167143    2710 start.go:369] acquired machines lock for "multinode-479000" in 24.208µs
	I0911 04:04:55.167154    2710 start.go:93] Provisioning new machine with config: &{Name:multinode-479000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-479000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:04:55.167190    2710 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:04:55.175629    2710 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:04:55.191762    2710 start.go:159] libmachine.API.Create for "multinode-479000" (driver="qemu2")
	I0911 04:04:55.191789    2710 client.go:168] LocalClient.Create starting
	I0911 04:04:55.191856    2710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:04:55.191882    2710 main.go:141] libmachine: Decoding PEM data...
	I0911 04:04:55.191899    2710 main.go:141] libmachine: Parsing certificate...
	I0911 04:04:55.191948    2710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:04:55.191967    2710 main.go:141] libmachine: Decoding PEM data...
	I0911 04:04:55.191980    2710 main.go:141] libmachine: Parsing certificate...
	I0911 04:04:55.192312    2710 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:04:55.340246    2710 main.go:141] libmachine: Creating SSH key...
	I0911 04:04:55.629255    2710 main.go:141] libmachine: Creating Disk image...
	I0911 04:04:55.629264    2710 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:04:55.629452    2710 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/disk.qcow2
	I0911 04:04:55.638604    2710 main.go:141] libmachine: STDOUT: 
	I0911 04:04:55.638631    2710 main.go:141] libmachine: STDERR: 
	I0911 04:04:55.638703    2710 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/disk.qcow2 +20000M
	I0911 04:04:55.645933    2710 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:04:55.645947    2710 main.go:141] libmachine: STDERR: 
	I0911 04:04:55.645961    2710 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/disk.qcow2
	I0911 04:04:55.645970    2710 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:04:55.645999    2710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:7d:03:79:82:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/disk.qcow2
	I0911 04:04:55.647474    2710 main.go:141] libmachine: STDOUT: 
	I0911 04:04:55.647487    2710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:04:55.647508    2710 client.go:171] LocalClient.Create took 455.724833ms
	I0911 04:04:57.649628    2710 start.go:128] duration metric: createHost completed in 2.482459084s
	I0911 04:04:57.649682    2710 start.go:83] releasing machines lock for "multinode-479000", held for 2.482590708s
	W0911 04:04:57.649748    2710 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:04:57.657076    2710 out.go:177] * Deleting "multinode-479000" in qemu2 ...
	W0911 04:04:57.677834    2710 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:04:57.677862    2710 start.go:687] Will try again in 5 seconds ...
	I0911 04:05:02.679930    2710 start.go:365] acquiring machines lock for multinode-479000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:05:02.680503    2710 start.go:369] acquired machines lock for "multinode-479000" in 404.25µs
	I0911 04:05:02.680662    2710 start.go:93] Provisioning new machine with config: &{Name:multinode-479000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-479000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:05:02.680944    2710 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:05:02.690605    2710 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:05:02.740197    2710 start.go:159] libmachine.API.Create for "multinode-479000" (driver="qemu2")
	I0911 04:05:02.740244    2710 client.go:168] LocalClient.Create starting
	I0911 04:05:02.740376    2710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:05:02.740454    2710 main.go:141] libmachine: Decoding PEM data...
	I0911 04:05:02.740475    2710 main.go:141] libmachine: Parsing certificate...
	I0911 04:05:02.740557    2710 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:05:02.740611    2710 main.go:141] libmachine: Decoding PEM data...
	I0911 04:05:02.740629    2710 main.go:141] libmachine: Parsing certificate...
	I0911 04:05:02.741315    2710 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:05:02.881469    2710 main.go:141] libmachine: Creating SSH key...
	I0911 04:05:03.032699    2710 main.go:141] libmachine: Creating Disk image...
	I0911 04:05:03.032708    2710 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:05:03.032862    2710 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/disk.qcow2
	I0911 04:05:03.041461    2710 main.go:141] libmachine: STDOUT: 
	I0911 04:05:03.041475    2710 main.go:141] libmachine: STDERR: 
	I0911 04:05:03.041525    2710 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/disk.qcow2 +20000M
	I0911 04:05:03.048652    2710 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:05:03.048676    2710 main.go:141] libmachine: STDERR: 
	I0911 04:05:03.048690    2710 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/disk.qcow2
	I0911 04:05:03.048695    2710 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:05:03.048737    2710 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:92:7d:d9:bc:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/disk.qcow2
	I0911 04:05:03.050221    2710 main.go:141] libmachine: STDOUT: 
	I0911 04:05:03.050235    2710 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:05:03.050248    2710 client.go:171] LocalClient.Create took 310.006792ms
	I0911 04:05:05.052406    2710 start.go:128] duration metric: createHost completed in 2.371490542s
	I0911 04:05:05.052495    2710 start.go:83] releasing machines lock for "multinode-479000", held for 2.372018125s
	W0911 04:05:05.052989    2710 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-479000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-479000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:05:05.063759    2710 out.go:177] 
	W0911 04:05:05.066729    2710 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:05:05.066754    2710 out.go:239] * 
	* 
	W0911 04:05:05.069395    2710 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:05:05.079675    2710 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-479000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000: exit status 7 (69.857708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.11s)
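The failing invocation above is instructive: minikube does not launch qemu-system-aarch64 directly, but wraps it in socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connection to QEMU as file descriptor 3 (hence -netdev socket,id=net0,fd=3). The daemon can therefore be probed without minikube at all; a sketch, assuming socket_vmnet_client simply execs the trailing command once the socket is connected:

    # when the daemon is down this prints the same
    # 'Failed to connect to "/var/run/socket_vmnet": Connection refused' seen above
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
      && echo "socket_vmnet daemon reachable"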

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (99.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (118.298208ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-479000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- rollout status deployment/busybox: exit status 1 (55.346417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (54.945708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.763959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.242333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.018208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.526416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0911 04:05:15.254985    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.547583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.950542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.911042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0911 04:06:11.701137    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory
E0911 04:06:11.707437    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory
E0911 04:06:11.719499    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory
E0911 04:06:11.741565    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory
E0911 04:06:11.783772    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory
E0911 04:06:11.864168    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory
E0911 04:06:12.026378    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory
E0911 04:06:12.348483    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory
E0911 04:06:12.990903    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory
E0911 04:06:14.273147    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory
E0911 04:06:16.835312    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.544541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0911 04:06:21.957405    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory
E0911 04:06:32.199299    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.479ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.323166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- exec  -- nslookup kubernetes.io: exit status 1 (53.894333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- exec  -- nslookup kubernetes.default: exit status 1 (53.900917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (54.325042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000: exit status 7 (29.098375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (99.24s)
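Every kubectl invocation above fails identically with 'no server found for cluster "multinode-479000"': because FreshStart2Nodes never provisioned a VM, no API server address was ever written to the test kubeconfig, so the ~99s of podIP polling was doomed before it began. A quick confirmation using standard kubectl config subcommands through the same harness:

    # the profile's context exists, but its cluster entry has no server address
    out/minikube-darwin-arm64 kubectl -p multinode-479000 -- config get-contexts
    out/minikube-darwin-arm64 kubectl -p multinode-479000 -- config view -o jsonpath='{.clusters[*].cluster.server}'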

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-479000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.741792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000: exit status 7 (29.063583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-479000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-479000 -v 3 --alsologtostderr: exit status 89 (39.567292ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-479000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:06:44.483860    2796 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:06:44.484066    2796 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:06:44.484069    2796 out.go:309] Setting ErrFile to fd 2...
	I0911 04:06:44.484071    2796 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:06:44.484174    2796 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:06:44.484398    2796 mustload.go:65] Loading cluster: multinode-479000
	I0911 04:06:44.484568    2796 config.go:182] Loaded profile config "multinode-479000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:06:44.489428    2796 out.go:177] * The control plane node must be running for this command
	I0911 04:06:44.492372    2796 out.go:177]   To start a cluster, run: "minikube start -p multinode-479000"

                                                
                                                
** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-479000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000: exit status 7 (28.520041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-479000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-479000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-479000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.1\",\"ClusterName\":\"multinode-479000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000: exit status 7 (28.904833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
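The assertion expects 3 nodes (the 2 requested at start plus the one AddNode should have added), but the stored profile only ever recorded its single control-plane node. With jq on the PATH (an assumption; it is not part of the harness), the relevant fields can be pulled straight out of the same JSON instead of reading the escaped blob above:

    out/minikube-darwin-arm64 profile list --output json \
      | jq '.valid[] | {name: .Name, status: .Status, nodes: (.Config.Nodes | length)}'
    # for this run: name=multinode-479000, status=Stopped, nodes=1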

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-479000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-479000 status --output json --alsologtostderr: exit status 7 (29.7185ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-479000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:06:44.654798    2806 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:06:44.654942    2806 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:06:44.654945    2806 out.go:309] Setting ErrFile to fd 2...
	I0911 04:06:44.654947    2806 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:06:44.655056    2806 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:06:44.655172    2806 out.go:303] Setting JSON to true
	I0911 04:06:44.655185    2806 mustload.go:65] Loading cluster: multinode-479000
	I0911 04:06:44.655235    2806 notify.go:220] Checking for updates...
	I0911 04:06:44.655729    2806 config.go:182] Loaded profile config "multinode-479000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:06:44.655736    2806 status.go:255] checking status of multinode-479000 ...
	I0911 04:06:44.656242    2806 status.go:330] multinode-479000 host status = "Stopped" (err=<nil>)
	I0911 04:06:44.656247    2806 status.go:343] host is not running, skipping remaining checks
	I0911 04:06:44.656249    2806 status.go:257] multinode-479000 status: &{Name:multinode-479000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-479000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000: exit status 7 (28.868666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
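This failure is a decoding-shape mismatch rather than another connection error: with a single node in the profile, `status --output json` emits one JSON object (shown in the stdout block above), while the test unmarshals into []cmd.Status and therefore needs an array. Easy to verify with jq (again assumed installed):

    out/minikube-darwin-arm64 -p multinode-479000 status --output json | jq type
    # prints "object"; decoding into []cmd.Status would require "array"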

                                                
                                    
TestMultiNode/serial/StopNode (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-479000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-479000 node stop m03: exit status 85 (46.392958ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-479000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-479000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-479000 status: exit status 7 (28.525833ms)

                                                
                                                
-- stdout --
	multinode-479000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-479000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-479000 status --alsologtostderr: exit status 7 (28.712417ms)

                                                
                                                
-- stdout --
	multinode-479000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:06:44.789043    2814 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:06:44.789193    2814 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:06:44.789196    2814 out.go:309] Setting ErrFile to fd 2...
	I0911 04:06:44.789198    2814 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:06:44.789306    2814 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:06:44.789414    2814 out.go:303] Setting JSON to false
	I0911 04:06:44.789425    2814 mustload.go:65] Loading cluster: multinode-479000
	I0911 04:06:44.789491    2814 notify.go:220] Checking for updates...
	I0911 04:06:44.789603    2814 config.go:182] Loaded profile config "multinode-479000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:06:44.789608    2814 status.go:255] checking status of multinode-479000 ...
	I0911 04:06:44.789786    2814 status.go:330] multinode-479000 host status = "Stopped" (err=<nil>)
	I0911 04:06:44.789789    2814 status.go:343] host is not running, skipping remaining checks
	I0911 04:06:44.789791    2814 status.go:257] multinode-479000 status: &{Name:multinode-479000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-479000 status --alsologtostderr": multinode-479000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000: exit status 7 (28.621167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
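Note: exit status 85 corresponds to the GUEST_NODE_RETRIEVE reason shown above ("Could not find node m03"). This is fallout from the earlier FreshStart2Nodes failure: the m03 worker was never created, so every node subcommand that names it fails before any VM is touched. A minimal shell sketch to confirm the state, reusing the binary and profile from this run:

    # List the nodes the profile actually has (m03 will be absent):
    out/minikube-darwin-arm64 node list -p multinode-479000
    # Profile status; exits 7 here because the control-plane host is Stopped:
    out/minikube-darwin-arm64 -p multinode-479000 status
    # Reproduces the failure above; exits 85 because m03 does not exist:
    out/minikube-darwin-arm64 -p multinode-479000 node stop m03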

TestMultiNode/serial/StartAfterStop (0.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-479000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-479000 node start m03 --alsologtostderr: exit status 85 (42.682875ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0911 04:06:44.845341    2818 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:06:44.845939    2818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:06:44.845945    2818 out.go:309] Setting ErrFile to fd 2...
	I0911 04:06:44.845948    2818 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:06:44.846133    2818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:06:44.846500    2818 mustload.go:65] Loading cluster: multinode-479000
	I0911 04:06:44.846665    2818 config.go:182] Loaded profile config "multinode-479000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:06:44.850327    2818 out.go:177] 
	W0911 04:06:44.853242    2818 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0911 04:06:44.853246    2818 out.go:239] * 
	* 
	W0911 04:06:44.854841    2818 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:06:44.857242    2818 out.go:177] 

** /stderr **
multinode_test.go:256: I0911 04:06:44.845341    2818 out.go:296] Setting OutFile to fd 1 ...
I0911 04:06:44.845939    2818 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:06:44.845945    2818 out.go:309] Setting ErrFile to fd 2...
I0911 04:06:44.845948    2818 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 04:06:44.846133    2818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
I0911 04:06:44.846500    2818 mustload.go:65] Loading cluster: multinode-479000
I0911 04:06:44.846665    2818 config.go:182] Loaded profile config "multinode-479000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 04:06:44.850327    2818 out.go:177] 
W0911 04:06:44.853242    2818 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0911 04:06:44.853246    2818 out.go:239] * 
* 
W0911 04:06:44.854841    2818 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0911 04:06:44.857242    2818 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-479000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-479000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-479000 status: exit status 7 (29.041042ms)

-- stdout --
	multinode-479000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-479000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000: exit status 7 (28.670667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)

TestMultiNode/serial/RestartKeepsNodes (5.36s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-479000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-479000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-479000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-479000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.17395375s)

-- stdout --
	* [multinode-479000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-479000 in cluster multinode-479000
	* Restarting existing qemu2 VM for "multinode-479000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-479000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:06:45.035566    2828 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:06:45.035669    2828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:06:45.035671    2828 out.go:309] Setting ErrFile to fd 2...
	I0911 04:06:45.035674    2828 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:06:45.035786    2828 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:06:45.036718    2828 out.go:303] Setting JSON to false
	I0911 04:06:45.051723    2828 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2179,"bootTime":1694428226,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:06:45.051784    2828 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:06:45.056325    2828 out.go:177] * [multinode-479000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:06:45.062400    2828 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:06:45.066292    2828 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:06:45.062462    2828 notify.go:220] Checking for updates...
	I0911 04:06:45.069318    2828 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:06:45.072350    2828 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:06:45.075237    2828 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:06:45.078310    2828 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:06:45.081591    2828 config.go:182] Loaded profile config "multinode-479000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:06:45.081632    2828 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:06:45.086226    2828 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 04:06:45.093298    2828 start.go:298] selected driver: qemu2
	I0911 04:06:45.093303    2828 start.go:902] validating driver "qemu2" against &{Name:multinode-479000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-479000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:06:45.093343    2828 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:06:45.095212    2828 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:06:45.095265    2828 cni.go:84] Creating CNI manager for ""
	I0911 04:06:45.095269    2828 cni.go:136] 1 nodes found, recommending kindnet
	I0911 04:06:45.095273    2828 start_flags.go:321] config:
	{Name:multinode-479000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-479000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:06:45.099045    2828 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:06:45.106247    2828 out.go:177] * Starting control plane node multinode-479000 in cluster multinode-479000
	I0911 04:06:45.109227    2828 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:06:45.109263    2828 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:06:45.109272    2828 cache.go:57] Caching tarball of preloaded images
	I0911 04:06:45.109339    2828 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:06:45.109345    2828 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:06:45.109435    2828 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/multinode-479000/config.json ...
	I0911 04:06:45.109729    2828 start.go:365] acquiring machines lock for multinode-479000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:06:45.109760    2828 start.go:369] acquired machines lock for "multinode-479000" in 24.291µs
	I0911 04:06:45.109770    2828 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:06:45.109774    2828 fix.go:54] fixHost starting: 
	I0911 04:06:45.109888    2828 fix.go:102] recreateIfNeeded on multinode-479000: state=Stopped err=<nil>
	W0911 04:06:45.109896    2828 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:06:45.117191    2828 out.go:177] * Restarting existing qemu2 VM for "multinode-479000" ...
	I0911 04:06:45.121393    2828 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:92:7d:d9:bc:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/disk.qcow2
	I0911 04:06:45.123295    2828 main.go:141] libmachine: STDOUT: 
	I0911 04:06:45.123308    2828 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:06:45.123334    2828 fix.go:56] fixHost completed within 13.559708ms
	I0911 04:06:45.123374    2828 start.go:83] releasing machines lock for "multinode-479000", held for 13.610834ms
	W0911 04:06:45.123380    2828 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:06:45.123409    2828 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:06:45.123413    2828 start.go:687] Will try again in 5 seconds ...
	I0911 04:06:50.125389    2828 start.go:365] acquiring machines lock for multinode-479000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:06:50.126021    2828 start.go:369] acquired machines lock for "multinode-479000" in 549.583µs
	I0911 04:06:50.126148    2828 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:06:50.126171    2828 fix.go:54] fixHost starting: 
	I0911 04:06:50.126953    2828 fix.go:102] recreateIfNeeded on multinode-479000: state=Stopped err=<nil>
	W0911 04:06:50.126983    2828 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:06:50.131487    2828 out.go:177] * Restarting existing qemu2 VM for "multinode-479000" ...
	I0911 04:06:50.139545    2828 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:92:7d:d9:bc:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/disk.qcow2
	I0911 04:06:50.148308    2828 main.go:141] libmachine: STDOUT: 
	I0911 04:06:50.148351    2828 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:06:50.148423    2828 fix.go:56] fixHost completed within 22.254166ms
	I0911 04:06:50.148439    2828 start.go:83] releasing machines lock for "multinode-479000", held for 22.397875ms
	W0911 04:06:50.148693    2828 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-479000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-479000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:06:50.155437    2828 out.go:177] 
	W0911 04:06:50.159532    2828 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:06:50.159591    2828 out.go:239] * 
	* 
	W0911 04:06:50.162168    2828 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:06:50.170383    2828 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-479000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-479000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000: exit status 7 (31.750917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.36s)
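Note: the repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused` above means the socket_vmnet daemon was not serving its unix socket on this host, so every qemu2 start/restart in this group fails the same way before the VM boots. A diagnostic sketch, assuming socket_vmnet is installed at the paths shown in the log (the Homebrew service name is an assumption and depends on how it was installed on the CI host):

    # Is the unix socket present?
    ls -l /var/run/socket_vmnet
    # Is the daemon process running?
    pgrep -fl socket_vmnet
    # If it was installed via Homebrew, restarting the service should clear
    # the connection-refused error (hypothetical for this particular host):
    sudo brew services restart socket_vmnet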

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-479000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-479000 node delete m03: exit status 89 (39.696541ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-479000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-479000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-479000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-479000 status --alsologtostderr: exit status 7 (28.780709ms)

-- stdout --
	multinode-479000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0911 04:06:50.350726    2844 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:06:50.350858    2844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:06:50.350861    2844 out.go:309] Setting ErrFile to fd 2...
	I0911 04:06:50.350863    2844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:06:50.350974    2844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:06:50.351096    2844 out.go:303] Setting JSON to false
	I0911 04:06:50.351108    2844 mustload.go:65] Loading cluster: multinode-479000
	I0911 04:06:50.351169    2844 notify.go:220] Checking for updates...
	I0911 04:06:50.351301    2844 config.go:182] Loaded profile config "multinode-479000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:06:50.351306    2844 status.go:255] checking status of multinode-479000 ...
	I0911 04:06:50.351490    2844 status.go:330] multinode-479000 host status = "Stopped" (err=<nil>)
	I0911 04:06:50.351494    2844 status.go:343] host is not running, skipping remaining checks
	I0911 04:06:50.351496    2844 status.go:257] multinode-479000 status: &{Name:multinode-479000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-479000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000: exit status 7 (29.331958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (0.15s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-479000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-479000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-479000 status: exit status 7 (29.552084ms)

-- stdout --
	multinode-479000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-479000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-479000 status --alsologtostderr: exit status 7 (28.536458ms)

-- stdout --
	multinode-479000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0911 04:06:50.499109    2852 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:06:50.499223    2852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:06:50.499225    2852 out.go:309] Setting ErrFile to fd 2...
	I0911 04:06:50.499228    2852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:06:50.499371    2852 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:06:50.499475    2852 out.go:303] Setting JSON to false
	I0911 04:06:50.499493    2852 mustload.go:65] Loading cluster: multinode-479000
	I0911 04:06:50.499534    2852 notify.go:220] Checking for updates...
	I0911 04:06:50.499672    2852 config.go:182] Loaded profile config "multinode-479000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:06:50.499676    2852 status.go:255] checking status of multinode-479000 ...
	I0911 04:06:50.499875    2852 status.go:330] multinode-479000 host status = "Stopped" (err=<nil>)
	I0911 04:06:50.499878    2852 status.go:343] host is not running, skipping remaining checks
	I0911 04:06:50.499880    2852 status.go:257] multinode-479000 status: &{Name:multinode-479000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-479000 status --alsologtostderr": multinode-479000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-479000 status --alsologtostderr": multinode-479000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000: exit status 7 (28.777208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.15s)

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-479000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
E0911 04:06:52.680988    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-479000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.176689667s)

-- stdout --
	* [multinode-479000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-479000 in cluster multinode-479000
	* Restarting existing qemu2 VM for "multinode-479000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-479000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:06:50.556494    2856 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:06:50.556591    2856 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:06:50.556595    2856 out.go:309] Setting ErrFile to fd 2...
	I0911 04:06:50.556598    2856 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:06:50.556713    2856 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:06:50.557685    2856 out.go:303] Setting JSON to false
	I0911 04:06:50.572661    2856 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2184,"bootTime":1694428226,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:06:50.572744    2856 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:06:50.577584    2856 out.go:177] * [multinode-479000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:06:50.584660    2856 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:06:50.588548    2856 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:06:50.584731    2856 notify.go:220] Checking for updates...
	I0911 04:06:50.591588    2856 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:06:50.594563    2856 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:06:50.597519    2856 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:06:50.600584    2856 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:06:50.603798    2856 config.go:182] Loaded profile config "multinode-479000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:06:50.604029    2856 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:06:50.608497    2856 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 04:06:50.615567    2856 start.go:298] selected driver: qemu2
	I0911 04:06:50.615574    2856 start.go:902] validating driver "qemu2" against &{Name:multinode-479000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-479000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:06:50.615645    2856 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:06:50.617578    2856 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:06:50.617609    2856 cni.go:84] Creating CNI manager for ""
	I0911 04:06:50.617613    2856 cni.go:136] 1 nodes found, recommending kindnet
	I0911 04:06:50.617618    2856 start_flags.go:321] config:
	{Name:multinode-479000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-479000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:06:50.621438    2856 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:06:50.629546    2856 out.go:177] * Starting control plane node multinode-479000 in cluster multinode-479000
	I0911 04:06:50.632434    2856 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:06:50.632457    2856 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:06:50.632467    2856 cache.go:57] Caching tarball of preloaded images
	I0911 04:06:50.632522    2856 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:06:50.632528    2856 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:06:50.632585    2856 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/multinode-479000/config.json ...
	I0911 04:06:50.632879    2856 start.go:365] acquiring machines lock for multinode-479000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:06:50.632906    2856 start.go:369] acquired machines lock for "multinode-479000" in 20.625µs
	I0911 04:06:50.632916    2856 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:06:50.632921    2856 fix.go:54] fixHost starting: 
	I0911 04:06:50.633042    2856 fix.go:102] recreateIfNeeded on multinode-479000: state=Stopped err=<nil>
	W0911 04:06:50.633050    2856 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:06:50.640430    2856 out.go:177] * Restarting existing qemu2 VM for "multinode-479000" ...
	I0911 04:06:50.644611    2856 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:92:7d:d9:bc:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/disk.qcow2
	I0911 04:06:50.646687    2856 main.go:141] libmachine: STDOUT: 
	I0911 04:06:50.646719    2856 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:06:50.646749    2856 fix.go:56] fixHost completed within 13.827625ms
	I0911 04:06:50.646755    2856 start.go:83] releasing machines lock for "multinode-479000", held for 13.844875ms
	W0911 04:06:50.646762    2856 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:06:50.646807    2856 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:06:50.646811    2856 start.go:687] Will try again in 5 seconds ...
	I0911 04:06:55.649058    2856 start.go:365] acquiring machines lock for multinode-479000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:06:55.649477    2856 start.go:369] acquired machines lock for "multinode-479000" in 335.917µs
	I0911 04:06:55.649632    2856 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:06:55.649664    2856 fix.go:54] fixHost starting: 
	I0911 04:06:55.650434    2856 fix.go:102] recreateIfNeeded on multinode-479000: state=Stopped err=<nil>
	W0911 04:06:55.650460    2856 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:06:55.654843    2856 out.go:177] * Restarting existing qemu2 VM for "multinode-479000" ...
	I0911 04:06:55.662027    2856 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:92:7d:d9:bc:b5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/multinode-479000/disk.qcow2
	I0911 04:06:55.671496    2856 main.go:141] libmachine: STDOUT: 
	I0911 04:06:55.671558    2856 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:06:55.671648    2856 fix.go:56] fixHost completed within 21.99375ms
	I0911 04:06:55.671667    2856 start.go:83] releasing machines lock for "multinode-479000", held for 22.169333ms
	W0911 04:06:55.671930    2856 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-479000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-479000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:06:55.680865    2856 out.go:177] 
	W0911 04:06:55.684884    2856 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:06:55.684907    2856 out.go:239] * 
	* 
	W0911 04:06:55.687614    2856 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:06:55.693851    2856 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-479000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000: exit status 7 (68.1505ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
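Every qemu2 VM start in this run dies on the same error: the driver's socket_vmnet client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"). Before reading the failures below as regressions in the tests themselves, it is worth confirming the daemon's state on the build agent. A minimal check, assuming socket_vmnet was installed the way minikube's qemu2 driver docs describe (paths may differ on other setups):

    # Confirm the socket exists and a socket_vmnet daemon is running.
    ls -l /var/run/socket_vmnet      # expect a socket file (mode starts with 's')
    pgrep -fl socket_vmnet           # expect the daemon's pid and command line
    # If the daemon is down, every "Creating qemu2 VM ..." step fails with:
    #   Failed to connect to "/var/run/socket_vmnet": Connection refused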
TestMultiNode/serial/ValidateNameConflict (19.84s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-479000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-479000-m01 --driver=qemu2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-479000-m01 --driver=qemu2 : exit status 80 (9.826838791s)
-- stdout --
	* [multinode-479000-m01] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-479000-m01 in cluster multinode-479000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-479000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-479000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-479000-m02 --driver=qemu2 
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-479000-m02 --driver=qemu2 : exit status 80 (9.770033167s)
-- stdout --
	* [multinode-479000-m02] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-479000-m02 in cluster multinode-479000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-479000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-479000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-479000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-479000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-479000: exit status 89 (78.006208ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-479000"
-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-479000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-479000 -n multinode-479000: exit status 7 (29.306042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-479000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.84s)
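The VMs never came up, so the conflict logic this test targets was barely exercised: profile names such as multinode-479000-m01 deliberately collide with the node names minikube generates for the multinode-479000 cluster. Which names are already taken can be read from the profile list; a sketch, assuming jq is installed and the valid/invalid JSON layout used by this minikube version:

    # A new profile name must not collide with an existing profile or with a
    # generated node name of the form <profile>-m01, <profile>-m02, ...
    out/minikube-darwin-arm64 profile list -o json | jq -r '.valid[].Name, .invalid[].Name'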
TestPreload (10.01s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-785000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-785000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.839300833s)
-- stdout --
	* [test-preload-785000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-785000 in cluster test-preload-785000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-785000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0911 04:07:15.761228    2910 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:07:15.761353    2910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:07:15.761356    2910 out.go:309] Setting ErrFile to fd 2...
	I0911 04:07:15.761358    2910 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:07:15.761467    2910 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:07:15.762459    2910 out.go:303] Setting JSON to false
	I0911 04:07:15.777828    2910 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2209,"bootTime":1694428226,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:07:15.777899    2910 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:07:15.783023    2910 out.go:177] * [test-preload-785000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:07:15.790957    2910 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:07:15.794965    2910 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:07:15.791000    2910 notify.go:220] Checking for updates...
	I0911 04:07:15.798911    2910 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:07:15.801987    2910 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:07:15.804967    2910 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:07:15.807999    2910 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:07:15.811311    2910 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:07:15.811357    2910 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:07:15.816010    2910 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:07:15.822939    2910 start.go:298] selected driver: qemu2
	I0911 04:07:15.822946    2910 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:07:15.822952    2910 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:07:15.825030    2910 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:07:15.827985    2910 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:07:15.831084    2910 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:07:15.831116    2910 cni.go:84] Creating CNI manager for ""
	I0911 04:07:15.831130    2910 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:07:15.831136    2910 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:07:15.831142    2910 start_flags.go:321] config:
	{Name:test-preload-785000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-785000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:07:15.836479    2910 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:07:15.843975    2910 out.go:177] * Starting control plane node test-preload-785000 in cluster test-preload-785000
	I0911 04:07:15.847952    2910 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0911 04:07:15.848019    2910 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/test-preload-785000/config.json ...
	I0911 04:07:15.848035    2910 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/test-preload-785000/config.json: {Name:mk44140bebc636091fe3385c1d875ec57b894832 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:07:15.848030    2910 cache.go:107] acquiring lock: {Name:mkff57bf48135c68d01d4da956cd3bff89d38a34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:07:15.848041    2910 cache.go:107] acquiring lock: {Name:mkf63d3aa46aab91b024d40b49844533fc4daedb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:07:15.848058    2910 cache.go:107] acquiring lock: {Name:mk524182fed2876606092e1bacdc1e8dd2205442 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:07:15.848177    2910 cache.go:107] acquiring lock: {Name:mk401a1e44bbc7f5869a32f59a935e0611ae9ff4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:07:15.848213    2910 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0911 04:07:15.848223    2910 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0911 04:07:15.848227    2910 cache.go:107] acquiring lock: {Name:mk17129e7c66eb5f316349c56a48101b517f4e21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:07:15.848029    2910 cache.go:107] acquiring lock: {Name:mk8369bcdd9b846fad76d05d4bf65b5c2f784223 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:07:15.848297    2910 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0911 04:07:15.848214    2910 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0911 04:07:15.848317    2910 cache.go:107] acquiring lock: {Name:mkcce786f1efb2a74aadfe5331db3bdcec6627ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:07:15.848373    2910 cache.go:107] acquiring lock: {Name:mk1ad952c61a26cfb642a14e00c63dc4cfb9d397 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:07:15.848405    2910 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0911 04:07:15.848434    2910 start.go:365] acquiring machines lock for test-preload-785000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:07:15.848458    2910 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0911 04:07:15.848469    2910 start.go:369] acquired machines lock for "test-preload-785000" in 27.083µs
	I0911 04:07:15.848481    2910 start.go:93] Provisioning new machine with config: &{Name:test-preload-785000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-785000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:07:15.848518    2910 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:07:15.848529    2910 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0911 04:07:15.856945    2910 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:07:15.848607    2910 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 04:07:15.862959    2910 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0911 04:07:15.863616    2910 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0911 04:07:15.863665    2910 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0911 04:07:15.863776    2910 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0911 04:07:15.868049    2910 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 04:07:15.868114    2910 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0911 04:07:15.868306    2910 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0911 04:07:15.868405    2910 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0911 04:07:15.872405    2910 start.go:159] libmachine.API.Create for "test-preload-785000" (driver="qemu2")
	I0911 04:07:15.872426    2910 client.go:168] LocalClient.Create starting
	I0911 04:07:15.872492    2910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:07:15.872517    2910 main.go:141] libmachine: Decoding PEM data...
	I0911 04:07:15.872529    2910 main.go:141] libmachine: Parsing certificate...
	I0911 04:07:15.872566    2910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:07:15.872586    2910 main.go:141] libmachine: Decoding PEM data...
	I0911 04:07:15.872596    2910 main.go:141] libmachine: Parsing certificate...
	I0911 04:07:15.872888    2910 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:07:16.015906    2910 main.go:141] libmachine: Creating SSH key...
	I0911 04:07:16.140094    2910 main.go:141] libmachine: Creating Disk image...
	I0911 04:07:16.140162    2910 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:07:16.140343    2910 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/disk.qcow2
	I0911 04:07:16.149041    2910 main.go:141] libmachine: STDOUT: 
	I0911 04:07:16.149059    2910 main.go:141] libmachine: STDERR: 
	I0911 04:07:16.149134    2910 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/disk.qcow2 +20000M
	I0911 04:07:16.156920    2910 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:07:16.156945    2910 main.go:141] libmachine: STDERR: 
	I0911 04:07:16.156960    2910 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/disk.qcow2
	I0911 04:07:16.156964    2910 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:07:16.157005    2910 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:68:f5:1f:3b:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/disk.qcow2
	I0911 04:07:16.158649    2910 main.go:141] libmachine: STDOUT: 
	I0911 04:07:16.158662    2910 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:07:16.158686    2910 client.go:171] LocalClient.Create took 286.264833ms
	I0911 04:07:16.525727    2910 cache.go:162] opening:  /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0911 04:07:16.573175    2910 cache.go:162] opening:  /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0911 04:07:16.747269    2910 cache.go:162] opening:  /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0911 04:07:16.881637    2910 cache.go:157] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0911 04:07:16.881654    2910 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.033541333s
	I0911 04:07:16.881663    2910 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0911 04:07:16.990240    2910 cache.go:162] opening:  /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0911 04:07:17.439234    2910 cache.go:162] opening:  /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0911 04:07:17.588759    2910 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0911 04:07:17.588969    2910 cache.go:162] opening:  /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0911 04:07:17.651703    2910 cache.go:162] opening:  /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0911 04:07:17.812716    2910 cache.go:157] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0911 04:07:17.812734    2910 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.964768833s
	I0911 04:07:17.812748    2910 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	W0911 04:07:17.862346    2910 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0911 04:07:17.862384    2910 cache.go:162] opening:  /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0911 04:07:18.158849    2910 start.go:128] duration metric: createHost completed in 2.310385291s
	I0911 04:07:18.158897    2910 start.go:83] releasing machines lock for "test-preload-785000", held for 2.310495541s
	W0911 04:07:18.158964    2910 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:07:18.167043    2910 out.go:177] * Deleting "test-preload-785000" in qemu2 ...
	W0911 04:07:18.186672    2910 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:07:18.186700    2910 start.go:687] Will try again in 5 seconds ...
	I0911 04:07:18.556646    2910 cache.go:157] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0911 04:07:18.556692    2910 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.708452834s
	I0911 04:07:18.556721    2910 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0911 04:07:19.707421    2910 cache.go:157] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0911 04:07:19.707485    2910 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.859556667s
	I0911 04:07:19.707514    2910 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0911 04:07:20.230201    2910 cache.go:157] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0911 04:07:20.230259    2910 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.382372042s
	I0911 04:07:20.230287    2910 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0911 04:07:21.229005    2910 cache.go:157] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0911 04:07:21.229053    2910 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.381188292s
	I0911 04:07:21.229085    2910 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0911 04:07:22.671890    2910 cache.go:157] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0911 04:07:22.671933    2910 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.823921958s
	I0911 04:07:22.671961    2910 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0911 04:07:23.186740    2910 start.go:365] acquiring machines lock for test-preload-785000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:07:23.187154    2910 start.go:369] acquired machines lock for "test-preload-785000" in 328.459µs
	I0911 04:07:23.187273    2910 start.go:93] Provisioning new machine with config: &{Name:test-preload-785000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-785000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:07:23.187554    2910 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:07:23.197190    2910 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:07:23.243689    2910 start.go:159] libmachine.API.Create for "test-preload-785000" (driver="qemu2")
	I0911 04:07:23.243732    2910 client.go:168] LocalClient.Create starting
	I0911 04:07:23.243864    2910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:07:23.243935    2910 main.go:141] libmachine: Decoding PEM data...
	I0911 04:07:23.243959    2910 main.go:141] libmachine: Parsing certificate...
	I0911 04:07:23.244056    2910 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:07:23.244097    2910 main.go:141] libmachine: Decoding PEM data...
	I0911 04:07:23.244116    2910 main.go:141] libmachine: Parsing certificate...
	I0911 04:07:23.244600    2910 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:07:23.430378    2910 main.go:141] libmachine: Creating SSH key...
	I0911 04:07:23.512997    2910 main.go:141] libmachine: Creating Disk image...
	I0911 04:07:23.513003    2910 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:07:23.513146    2910 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/disk.qcow2
	I0911 04:07:23.521663    2910 main.go:141] libmachine: STDOUT: 
	I0911 04:07:23.521678    2910 main.go:141] libmachine: STDERR: 
	I0911 04:07:23.521741    2910 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/disk.qcow2 +20000M
	I0911 04:07:23.529047    2910 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:07:23.529061    2910 main.go:141] libmachine: STDERR: 
	I0911 04:07:23.529072    2910 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/disk.qcow2
	I0911 04:07:23.529079    2910 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:07:23.529126    2910 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:33:ae:fc:1d:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/test-preload-785000/disk.qcow2
	I0911 04:07:23.530716    2910 main.go:141] libmachine: STDOUT: 
	I0911 04:07:23.530731    2910 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:07:23.530744    2910 client.go:171] LocalClient.Create took 287.01675ms
	I0911 04:07:24.974737    2910 cache.go:157] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0911 04:07:24.974782    2910 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.126869375s
	I0911 04:07:24.974808    2910 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0911 04:07:24.974862    2910 cache.go:87] Successfully saved all images to host disk.
	I0911 04:07:25.532975    2910 start.go:128] duration metric: createHost completed in 2.345377958s
	I0911 04:07:25.533039    2910 start.go:83] releasing machines lock for "test-preload-785000", held for 2.345934375s
	W0911 04:07:25.533317    2910 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-785000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:07:25.542920    2910 out.go:177] 
	W0911 04:07:25.547045    2910 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:07:25.547069    2910 out.go:239] * 
	W0911 04:07:25.549923    2910 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:07:25.559966    2910 out.go:177] 
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-785000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:522: *** TestPreload FAILED at 2023-09-11 04:07:25.576092 -0700 PDT m=+828.149458626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-785000 -n test-preload-785000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-785000 -n test-preload-785000: exit status 7 (70.638167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-785000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-785000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-785000
--- FAIL: TestPreload (10.01s)
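One half of this test did succeed: while VM creation was failing, the image cache for v1.24.4 was fully populated (every "save to tar file ... succeeded" line above). Those tarballs live under MINIKUBE_HOME independent of any profile and are reused on the next start; they can be inspected directly at the cache path shown in the log:

    # Cached image tarballs survive the failed VM creation and later cleanup.
    ls /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/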
TestScheduledStopUnix (9.81s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-740000 --memory=2048 --driver=qemu2 
E0911 04:07:31.358564    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
E0911 04:07:33.642238    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-740000 --memory=2048 --driver=qemu2 : exit status 80 (9.637336875s)
-- stdout --
	* [scheduled-stop-740000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-740000 in cluster scheduled-stop-740000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-740000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-740000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80
-- stdout --
	* [scheduled-stop-740000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-740000 in cluster scheduled-stop-740000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-740000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-740000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-09-11 04:07:35.38354 -0700 PDT m=+837.957220626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-740000 -n scheduled-stop-740000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-740000 -n scheduled-stop-740000: exit status 7 (69.1575ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-740000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-740000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-740000
--- FAIL: TestScheduledStopUnix (9.81s)
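The two cert_rotation errors at the top of this test come from pid 1565, which appears to be the long-running test binary rather than either minikube invocation: client-go's certificate-rotation watcher is still tracking client certificates of profiles (functional-740000, ingress-addon-legacy-937000) that earlier tests already deleted. The noise is harmless here, but stale kubeconfig contexts can be cleared; an illustrative cleanup, with context names taken from the log:

    # Drop kubeconfig contexts whose client certs no longer exist on disk.
    kubectl config delete-context functional-740000
    kubectl config delete-context ingress-addon-legacy-937000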
TestSkaffold (11.81s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2955433604 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-671000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-671000 --memory=2600 --driver=qemu2 : exit status 80 (9.816312292s)
-- stdout --
	* [skaffold-671000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-671000 in cluster skaffold-671000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-671000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-671000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80
-- stdout --
	* [skaffold-671000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-671000 in cluster skaffold-671000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-671000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-671000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-09-11 04:07:47.196171 -0700 PDT m=+849.770226542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-671000 -n skaffold-671000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-671000 -n skaffold-671000: exit status 7 (63.643542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-671000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-671000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-671000
--- FAIL: TestSkaffold (11.81s)
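Triage note: every qemu2-driver start in this run dies at the same step — libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon behind /var/run/socket_vmnet. A minimal check on the agent (a sketch, not part of the suite; the Homebrew service name assumes the brew-installed socket_vmnet that the qemu2 driver expects):

	# is anything serving the socket?
	ls -l /var/run/socket_vmnet
	# probe it the same way libmachine does: the client connects, then execs the given command
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# if the probe is refused, restart the service (Homebrew install assumed)
	sudo brew services restart socket_vmnet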

                                                
                                    
TestRunningBinaryUpgrade (159.49s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:107: v1.6.2 release installation failed: bad response code: 404
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-09-11 04:11:06.326134 -0700 PDT m=+1048.906482917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-022000 -n running-upgrade-022000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-022000 -n running-upgrade-022000: exit status 85 (86.039708ms)

                                                
                                                
-- stdout --
	* Profile "running-upgrade-022000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-022000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-022000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-022000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-022000\"")
helpers_test.go:175: Cleaning up "running-upgrade-022000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-022000
--- FAIL: TestRunningBinaryUpgrade (159.49s)
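Triage note: the 404 is deterministic rather than a flaky download — the test fetches the v1.6.2 minikube release binary for the host platform, and no darwin/arm64 asset was published for that tag (arm64 release binaries only appear in much later versions). The asset names below follow minikube's usual release naming scheme and can be checked directly:

	# arm64 asset for v1.6.2 — expect 404
	curl -s -o /dev/null -w '%{http_code}\n' -L https://github.com/kubernetes/minikube/releases/download/v1.6.2/minikube-darwin-arm64
	# the amd64 asset from the same tag does exist — expect 200
	curl -s -o /dev/null -w '%{http_code}\n' -L https://github.com/kubernetes/minikube/releases/download/v1.6.2/minikube-darwin-amd64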

                                                
                                    
TestKubernetesUpgrade (15.43s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-387000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
E0911 04:11:11.691733    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-387000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.894110584s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-387000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-387000 in cluster kubernetes-upgrade-387000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-387000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:11:06.671345    3408 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:11:06.671446    3408 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:11:06.671448    3408 out.go:309] Setting ErrFile to fd 2...
	I0911 04:11:06.671450    3408 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:11:06.671559    3408 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:11:06.672550    3408 out.go:303] Setting JSON to false
	I0911 04:11:06.687647    3408 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2440,"bootTime":1694428226,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:11:06.687712    3408 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:11:06.692457    3408 out.go:177] * [kubernetes-upgrade-387000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:11:06.699422    3408 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:11:06.699501    3408 notify.go:220] Checking for updates...
	I0911 04:11:06.703464    3408 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:11:06.706539    3408 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:11:06.709482    3408 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:11:06.712472    3408 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:11:06.715504    3408 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:11:06.718842    3408 config.go:182] Loaded profile config "cert-expiration-402000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:11:06.718910    3408 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:11:06.718958    3408 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:11:06.723458    3408 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:11:06.730355    3408 start.go:298] selected driver: qemu2
	I0911 04:11:06.730360    3408 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:11:06.730365    3408 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:11:06.732221    3408 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:11:06.735485    3408 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:11:06.738522    3408 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0911 04:11:06.738540    3408 cni.go:84] Creating CNI manager for ""
	I0911 04:11:06.738546    3408 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0911 04:11:06.738551    3408 start_flags.go:321] config:
	{Name:kubernetes-upgrade-387000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:11:06.742666    3408 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:11:06.750486    3408 out.go:177] * Starting control plane node kubernetes-upgrade-387000 in cluster kubernetes-upgrade-387000
	I0911 04:11:06.754301    3408 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 04:11:06.754319    3408 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0911 04:11:06.754328    3408 cache.go:57] Caching tarball of preloaded images
	I0911 04:11:06.754389    3408 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:11:06.754394    3408 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0911 04:11:06.754451    3408 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/kubernetes-upgrade-387000/config.json ...
	I0911 04:11:06.754462    3408 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/kubernetes-upgrade-387000/config.json: {Name:mkd8221bd6b1871b21eca026cf1c3375ba601937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:11:06.754646    3408 start.go:365] acquiring machines lock for kubernetes-upgrade-387000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:11:06.754675    3408 start.go:369] acquired machines lock for "kubernetes-upgrade-387000" in 22.208µs
	I0911 04:11:06.754685    3408 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:11:06.754715    3408 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:11:06.761376    3408 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:11:06.776710    3408 start.go:159] libmachine.API.Create for "kubernetes-upgrade-387000" (driver="qemu2")
	I0911 04:11:06.776740    3408 client.go:168] LocalClient.Create starting
	I0911 04:11:06.776798    3408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:11:06.776828    3408 main.go:141] libmachine: Decoding PEM data...
	I0911 04:11:06.776842    3408 main.go:141] libmachine: Parsing certificate...
	I0911 04:11:06.776882    3408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:11:06.776900    3408 main.go:141] libmachine: Decoding PEM data...
	I0911 04:11:06.776909    3408 main.go:141] libmachine: Parsing certificate...
	I0911 04:11:06.777231    3408 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:11:06.957989    3408 main.go:141] libmachine: Creating SSH key...
	I0911 04:11:07.041868    3408 main.go:141] libmachine: Creating Disk image...
	I0911 04:11:07.041873    3408 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:11:07.042029    3408 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/disk.qcow2
	I0911 04:11:07.050507    3408 main.go:141] libmachine: STDOUT: 
	I0911 04:11:07.050528    3408 main.go:141] libmachine: STDERR: 
	I0911 04:11:07.050595    3408 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/disk.qcow2 +20000M
	I0911 04:11:07.057842    3408 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:11:07.057855    3408 main.go:141] libmachine: STDERR: 
	I0911 04:11:07.057872    3408 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/disk.qcow2
	I0911 04:11:07.057888    3408 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:11:07.057930    3408 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:1d:70:84:07:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/disk.qcow2
	I0911 04:11:07.059406    3408 main.go:141] libmachine: STDOUT: 
	I0911 04:11:07.059423    3408 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:11:07.059441    3408 client.go:171] LocalClient.Create took 282.70225ms
	I0911 04:11:09.061534    3408 start.go:128] duration metric: createHost completed in 2.306874625s
	I0911 04:11:09.061782    3408 start.go:83] releasing machines lock for "kubernetes-upgrade-387000", held for 2.307170417s
	W0911 04:11:09.061837    3408 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:11:09.070082    3408 out.go:177] * Deleting "kubernetes-upgrade-387000" in qemu2 ...
	W0911 04:11:09.090194    3408 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:11:09.090239    3408 start.go:687] Will try again in 5 seconds ...
	I0911 04:11:14.092244    3408 start.go:365] acquiring machines lock for kubernetes-upgrade-387000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:11:14.092630    3408 start.go:369] acquired machines lock for "kubernetes-upgrade-387000" in 302.375µs
	I0911 04:11:14.092738    3408 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:11:14.093073    3408 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:11:14.102700    3408 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:11:14.148345    3408 start.go:159] libmachine.API.Create for "kubernetes-upgrade-387000" (driver="qemu2")
	I0911 04:11:14.148388    3408 client.go:168] LocalClient.Create starting
	I0911 04:11:14.148510    3408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:11:14.148581    3408 main.go:141] libmachine: Decoding PEM data...
	I0911 04:11:14.148605    3408 main.go:141] libmachine: Parsing certificate...
	I0911 04:11:14.148685    3408 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:11:14.148727    3408 main.go:141] libmachine: Decoding PEM data...
	I0911 04:11:14.148746    3408 main.go:141] libmachine: Parsing certificate...
	I0911 04:11:14.149314    3408 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:11:14.274952    3408 main.go:141] libmachine: Creating SSH key...
	I0911 04:11:14.478205    3408 main.go:141] libmachine: Creating Disk image...
	I0911 04:11:14.478211    3408 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:11:14.478371    3408 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/disk.qcow2
	I0911 04:11:14.487049    3408 main.go:141] libmachine: STDOUT: 
	I0911 04:11:14.487063    3408 main.go:141] libmachine: STDERR: 
	I0911 04:11:14.487117    3408 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/disk.qcow2 +20000M
	I0911 04:11:14.494375    3408 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:11:14.494392    3408 main.go:141] libmachine: STDERR: 
	I0911 04:11:14.494410    3408 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/disk.qcow2
	I0911 04:11:14.494421    3408 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:11:14.494465    3408 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:e8:1e:3d:bd:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/disk.qcow2
	I0911 04:11:14.495998    3408 main.go:141] libmachine: STDOUT: 
	I0911 04:11:14.496017    3408 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:11:14.496028    3408 client.go:171] LocalClient.Create took 347.646167ms
	I0911 04:11:16.498145    3408 start.go:128] duration metric: createHost completed in 2.405098959s
	I0911 04:11:16.498196    3408 start.go:83] releasing machines lock for "kubernetes-upgrade-387000", held for 2.405616875s
	W0911 04:11:16.498638    3408 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-387000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-387000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:11:16.508268    3408 out.go:177] 
	W0911 04:11:16.512409    3408 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:11:16.512480    3408 out.go:239] * 
	* 
	W0911 04:11:16.515062    3408 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:11:16.524166    3408 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-387000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-387000
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-387000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-387000 status --format={{.Host}}: exit status 7 (39.941916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-387000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-387000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.178818041s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-387000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-387000 in cluster kubernetes-upgrade-387000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-387000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-387000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:11:16.708937    3426 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:11:16.709037    3426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:11:16.709039    3426 out.go:309] Setting ErrFile to fd 2...
	I0911 04:11:16.709042    3426 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:11:16.709156    3426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:11:16.710083    3426 out.go:303] Setting JSON to false
	I0911 04:11:16.725431    3426 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2450,"bootTime":1694428226,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:11:16.725496    3426 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:11:16.730002    3426 out.go:177] * [kubernetes-upgrade-387000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:11:16.735827    3426 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:11:16.735875    3426 notify.go:220] Checking for updates...
	I0911 04:11:16.739817    3426 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:11:16.743873    3426 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:11:16.746838    3426 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:11:16.749796    3426 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:11:16.752848    3426 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:11:16.756001    3426 config.go:182] Loaded profile config "kubernetes-upgrade-387000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0911 04:11:16.756241    3426 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:11:16.760842    3426 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 04:11:16.766752    3426 start.go:298] selected driver: qemu2
	I0911 04:11:16.766757    3426 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:11:16.766817    3426 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:11:16.768897    3426 cni.go:84] Creating CNI manager for ""
	I0911 04:11:16.768918    3426 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:11:16.768923    3426 start_flags.go:321] config:
	{Name:kubernetes-upgrade-387000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubernetes-upgrade-387000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:11:16.772873    3426 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:11:16.780809    3426 out.go:177] * Starting control plane node kubernetes-upgrade-387000 in cluster kubernetes-upgrade-387000
	I0911 04:11:16.784787    3426 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:11:16.784805    3426 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:11:16.784820    3426 cache.go:57] Caching tarball of preloaded images
	I0911 04:11:16.784869    3426 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:11:16.784875    3426 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:11:16.784935    3426 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/kubernetes-upgrade-387000/config.json ...
	I0911 04:11:16.785313    3426 start.go:365] acquiring machines lock for kubernetes-upgrade-387000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:11:16.785342    3426 start.go:369] acquired machines lock for "kubernetes-upgrade-387000" in 20.375µs
	I0911 04:11:16.785352    3426 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:11:16.785357    3426 fix.go:54] fixHost starting: 
	I0911 04:11:16.785475    3426 fix.go:102] recreateIfNeeded on kubernetes-upgrade-387000: state=Stopped err=<nil>
	W0911 04:11:16.785483    3426 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:11:16.793850    3426 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-387000" ...
	I0911 04:11:16.797841    3426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:e8:1e:3d:bd:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/disk.qcow2
	I0911 04:11:16.799809    3426 main.go:141] libmachine: STDOUT: 
	I0911 04:11:16.799825    3426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:11:16.799852    3426 fix.go:56] fixHost completed within 14.494ms
	I0911 04:11:16.799857    3426 start.go:83] releasing machines lock for "kubernetes-upgrade-387000", held for 14.511458ms
	W0911 04:11:16.799863    3426 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:11:16.799899    3426 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:11:16.799903    3426 start.go:687] Will try again in 5 seconds ...
	I0911 04:11:21.801837    3426 start.go:365] acquiring machines lock for kubernetes-upgrade-387000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:11:21.802162    3426 start.go:369] acquired machines lock for "kubernetes-upgrade-387000" in 255.583µs
	I0911 04:11:21.802295    3426 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:11:21.802313    3426 fix.go:54] fixHost starting: 
	I0911 04:11:21.803095    3426 fix.go:102] recreateIfNeeded on kubernetes-upgrade-387000: state=Stopped err=<nil>
	W0911 04:11:21.803123    3426 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:11:21.807274    3426 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-387000" ...
	I0911 04:11:21.811559    3426 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:e8:1e:3d:bd:ac -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/disk.qcow2
	I0911 04:11:21.819832    3426 main.go:141] libmachine: STDOUT: 
	I0911 04:11:21.819891    3426 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:11:21.819962    3426 fix.go:56] fixHost completed within 17.650375ms
	I0911 04:11:21.819979    3426 start.go:83] releasing machines lock for "kubernetes-upgrade-387000", held for 17.795917ms
	W0911 04:11:21.820192    3426 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-387000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-387000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:11:21.832462    3426 out.go:177] 
	W0911 04:11:21.836487    3426 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:11:21.836518    3426 out.go:239] * 
	* 
	W0911 04:11:21.839214    3426 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:11:21.847469    3426 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:258: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-387000 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-387000 version --output=json
version_upgrade_test.go:261: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-387000 version --output=json: exit status 1 (65.895625ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-387000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:263: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-09-11 04:11:21.927474 -0700 PDT m=+1064.508315334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-387000 -n kubernetes-upgrade-387000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-387000 -n kubernetes-upgrade-387000: exit status 7 (32.196083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-387000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-387000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-387000
--- FAIL: TestKubernetesUpgrade (15.43s)
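Triage note: both the fresh create and the post-stop restart (fixHost) die on the same exec. Stripped of the firmware, cdrom, QMP and pidfile options, the launch pattern visible in the log is: socket_vmnet_client connects to /var/run/socket_vmnet, hands the connected socket to QEMU as fd 3, and QEMU wires it to the guest NIC via -netdev socket,fd=3. When the connect is refused, the wrapper exits before QEMU ever starts, which is why each attempt holds the machines lock for only ~15-20ms. The shape of the command (paths and MAC copied from the log above; a sketch, not a standalone repro):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
		qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
		-m 2200 -smp 2 \
		-device virtio-net-pci,netdev=net0,mac=0a:e8:1e:3d:bd:ac \
		-netdev socket,id=net0,fd=3 \
		-daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubernetes-upgrade-387000/disk.qcow2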

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.93s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17223
- KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3634485292/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.93s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.65s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.2 on darwin (arm64)
- MINIKUBE_LOCATION=17223
- KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2410180322/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.65s)
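Triage note: both hyperkit subtests fail for an environmental reason rather than an upgrade regression — hyperkit only runs on Intel Macs, so on this Apple Silicon agent minikube refuses the driver up front with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic executes. Reproducing the condition on the agent is two commands (a sketch; the exit code is taken from the log above):

	uname -m    # prints: arm64
	out/minikube-darwin-arm64 start --driver=hyperkit; echo $?    # DRV_UNSUPPORTED_OS, prints: 56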

                                                
                                    
TestStoppedBinaryUpgrade/Setup (156.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:168: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (156.94s)

                                                
                                    
TestPause/serial/Start (9.88s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-741000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-741000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.811825916s)

                                                
                                                
-- stdout --
	* [pause-741000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-741000 in cluster pause-741000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-741000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-741000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-741000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-741000 -n pause-741000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-741000 -n pause-741000: exit status 7 (70.005ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-741000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.88s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-657000 --driver=qemu2 
E0911 04:11:39.399507    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/ingress-addon-legacy-937000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-657000 --driver=qemu2 : exit status 80 (9.782145833s)

-- stdout --
	* [NoKubernetes-657000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-657000 in cluster NoKubernetes-657000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-657000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-657000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-657000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-657000 -n NoKubernetes-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-657000 -n NoKubernetes-657000: exit status 7 (69.161083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-657000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.85s)

TestNoKubernetes/serial/StartWithStopK8s (5.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-657000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-657000 --no-kubernetes --driver=qemu2 : exit status 80 (5.233519625s)

-- stdout --
	* [NoKubernetes-657000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-657000
	* Restarting existing qemu2 VM for "NoKubernetes-657000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-657000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-657000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-657000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-657000 -n NoKubernetes-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-657000 -n NoKubernetes-657000: exit status 7 (67.537208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-657000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.30s)

TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-657000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-657000 --no-kubernetes --driver=qemu2 : exit status 80 (5.243833166s)

-- stdout --
	* [NoKubernetes-657000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-657000
	* Restarting existing qemu2 VM for "NoKubernetes-657000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-657000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-657000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-657000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-657000 -n NoKubernetes-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-657000 -n NoKubernetes-657000: exit status 7 (69.635458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-657000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.31s)

TestNoKubernetes/serial/StartNoArgs (5.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-657000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-657000 --driver=qemu2 : exit status 80 (5.242074417s)

-- stdout --
	* [NoKubernetes-657000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-657000
	* Restarting existing qemu2 VM for "NoKubernetes-657000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-657000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-657000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-657000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-657000 -n NoKubernetes-657000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-657000 -n NoKubernetes-657000: exit status 7 (71.511917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-657000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)

TestNetworkPlugins/group/auto/Start (9.69s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.684803584s)

-- stdout --
	* [auto-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-687000 in cluster auto-687000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-687000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:12:03.551677    3548 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:12:03.551784    3548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:12:03.551787    3548 out.go:309] Setting ErrFile to fd 2...
	I0911 04:12:03.551789    3548 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:12:03.551912    3548 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:12:03.552888    3548 out.go:303] Setting JSON to false
	I0911 04:12:03.567855    3548 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2497,"bootTime":1694428226,"procs":418,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:12:03.567912    3548 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:12:03.571750    3548 out.go:177] * [auto-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:12:03.579794    3548 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:12:03.583734    3548 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:12:03.579940    3548 notify.go:220] Checking for updates...
	I0911 04:12:03.589800    3548 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:12:03.592757    3548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:12:03.595822    3548 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:12:03.598795    3548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:12:03.602015    3548 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:12:03.602056    3548 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:12:03.605757    3548 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:12:03.612628    3548 start.go:298] selected driver: qemu2
	I0911 04:12:03.612635    3548 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:12:03.612640    3548 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:12:03.614564    3548 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:12:03.617779    3548 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:12:03.620860    3548 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:12:03.620881    3548 cni.go:84] Creating CNI manager for ""
	I0911 04:12:03.620890    3548 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:12:03.620895    3548 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:12:03.620900    3548 start_flags.go:321] config:
	{Name:auto-687000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:auto-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:12:03.625357    3548 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:12:03.633745    3548 out.go:177] * Starting control plane node auto-687000 in cluster auto-687000
	I0911 04:12:03.636743    3548 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:12:03.636764    3548 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:12:03.636780    3548 cache.go:57] Caching tarball of preloaded images
	I0911 04:12:03.636849    3548 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:12:03.636863    3548 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:12:03.636949    3548 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/auto-687000/config.json ...
	I0911 04:12:03.636961    3548 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/auto-687000/config.json: {Name:mkdb40ee4ae320ec35de1a383d199a1f8c0c0d2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:12:03.637166    3548 start.go:365] acquiring machines lock for auto-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:12:03.637196    3548 start.go:369] acquired machines lock for "auto-687000" in 23.791µs
	I0911 04:12:03.637206    3548 start.go:93] Provisioning new machine with config: &{Name:auto-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:auto-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:12:03.637245    3548 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:12:03.641778    3548 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:12:03.656677    3548 start.go:159] libmachine.API.Create for "auto-687000" (driver="qemu2")
	I0911 04:12:03.656702    3548 client.go:168] LocalClient.Create starting
	I0911 04:12:03.656752    3548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:12:03.656776    3548 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:03.656790    3548 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:03.656829    3548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:12:03.656846    3548 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:03.656853    3548 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:03.657177    3548 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:12:03.768552    3548 main.go:141] libmachine: Creating SSH key...
	I0911 04:12:03.875543    3548 main.go:141] libmachine: Creating Disk image...
	I0911 04:12:03.875548    3548 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:12:03.875679    3548 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/disk.qcow2
	I0911 04:12:03.884277    3548 main.go:141] libmachine: STDOUT: 
	I0911 04:12:03.884289    3548 main.go:141] libmachine: STDERR: 
	I0911 04:12:03.884368    3548 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/disk.qcow2 +20000M
	I0911 04:12:03.891621    3548 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:12:03.891635    3548 main.go:141] libmachine: STDERR: 
	I0911 04:12:03.891646    3548 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/disk.qcow2
	I0911 04:12:03.891652    3548 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:12:03.891704    3548 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:56:c3:c3:fd:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/disk.qcow2
	I0911 04:12:03.893199    3548 main.go:141] libmachine: STDOUT: 
	I0911 04:12:03.893217    3548 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:12:03.893238    3548 client.go:171] LocalClient.Create took 236.536125ms
	I0911 04:12:05.895382    3548 start.go:128] duration metric: createHost completed in 2.258191958s
	I0911 04:12:05.895433    3548 start.go:83] releasing machines lock for "auto-687000", held for 2.258295958s
	W0911 04:12:05.895484    3548 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:12:05.903979    3548 out.go:177] * Deleting "auto-687000" in qemu2 ...
	W0911 04:12:05.927573    3548 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:12:05.927607    3548 start.go:687] Will try again in 5 seconds ...
	I0911 04:12:10.929702    3548 start.go:365] acquiring machines lock for auto-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:12:10.930288    3548 start.go:369] acquired machines lock for "auto-687000" in 469.375µs
	I0911 04:12:10.930445    3548 start.go:93] Provisioning new machine with config: &{Name:auto-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:auto-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:12:10.930783    3548 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:12:10.940537    3548 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:12:10.986520    3548 start.go:159] libmachine.API.Create for "auto-687000" (driver="qemu2")
	I0911 04:12:10.986580    3548 client.go:168] LocalClient.Create starting
	I0911 04:12:10.986702    3548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:12:10.986788    3548 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:10.986805    3548 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:10.986865    3548 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:12:10.986901    3548 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:10.986916    3548 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:10.987384    3548 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:12:11.113528    3548 main.go:141] libmachine: Creating SSH key...
	I0911 04:12:11.150248    3548 main.go:141] libmachine: Creating Disk image...
	I0911 04:12:11.150253    3548 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:12:11.150390    3548 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/disk.qcow2
	I0911 04:12:11.158887    3548 main.go:141] libmachine: STDOUT: 
	I0911 04:12:11.158900    3548 main.go:141] libmachine: STDERR: 
	I0911 04:12:11.158954    3548 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/disk.qcow2 +20000M
	I0911 04:12:11.166038    3548 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:12:11.166050    3548 main.go:141] libmachine: STDERR: 
	I0911 04:12:11.166063    3548 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/disk.qcow2
	I0911 04:12:11.166067    3548 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:12:11.166115    3548 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:7e:c6:f7:e8:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/auto-687000/disk.qcow2
	I0911 04:12:11.167567    3548 main.go:141] libmachine: STDOUT: 
	I0911 04:12:11.167582    3548 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:12:11.167593    3548 client.go:171] LocalClient.Create took 181.013375ms
	I0911 04:12:13.169713    3548 start.go:128] duration metric: createHost completed in 2.238970584s
	I0911 04:12:13.169765    3548 start.go:83] releasing machines lock for "auto-687000", held for 2.239524208s
	W0911 04:12:13.170095    3548 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:12:13.178686    3548 out.go:177] 
	W0911 04:12:13.182781    3548 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:12:13.182917    3548 out.go:239] * 
	* 
	W0911 04:12:13.185659    3548 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:12:13.194735    3548 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.69s)
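
The --alsologtostderr trace above makes the failure sequence explicit: the first createHost fails on the socket_vmnet dial, the partially created "auto-687000" machine is deleted, a second attempt runs five seconds later ("Will try again in 5 seconds ..."), and only then does minikube exit with GUEST_PROVISION. A sketch of that observable delete-wait-retry flow, with createHost/deleteHost as hypothetical stand-ins rather than minikube's actual functions:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// Stand-in that fails the way the qemu2 driver does throughout this run.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	// Stand-in for the cleanup logged as `* Deleting "auto-687000" in qemu2 ...`.
	func deleteHost() {}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			deleteHost()
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err = createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err) // exit status 80
			}
		}
	}

Because both attempts dial the same dead socket, the retry can never succeed; the roughly 9.7s test duration is essentially the 5-second back-off plus two fast create/delete cycles of about 2.2s each.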

TestNetworkPlugins/group/kindnet/Start (9.74s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.738252375s)

-- stdout --
	* [kindnet-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-687000 in cluster kindnet-687000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-687000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:12:15.315602    3658 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:12:15.315725    3658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:12:15.315729    3658 out.go:309] Setting ErrFile to fd 2...
	I0911 04:12:15.315732    3658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:12:15.315847    3658 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:12:15.316919    3658 out.go:303] Setting JSON to false
	I0911 04:12:15.332112    3658 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2509,"bootTime":1694428226,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:12:15.332187    3658 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:12:15.337129    3658 out.go:177] * [kindnet-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:12:15.345118    3658 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:12:15.349030    3658 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:12:15.345190    3658 notify.go:220] Checking for updates...
	I0911 04:12:15.355086    3658 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:12:15.358059    3658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:12:15.359487    3658 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:12:15.362026    3658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:12:15.365467    3658 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:12:15.365507    3658 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:12:15.369876    3658 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:12:15.377034    3658 start.go:298] selected driver: qemu2
	I0911 04:12:15.377039    3658 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:12:15.377044    3658 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:12:15.379005    3658 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:12:15.382113    3658 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:12:15.385163    3658 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:12:15.385196    3658 cni.go:84] Creating CNI manager for "kindnet"
	I0911 04:12:15.385201    3658 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0911 04:12:15.385205    3658 start_flags.go:321] config:
	{Name:kindnet-687000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:12:15.389769    3658 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:12:15.398081    3658 out.go:177] * Starting control plane node kindnet-687000 in cluster kindnet-687000
	I0911 04:12:15.402037    3658 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:12:15.402058    3658 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:12:15.402073    3658 cache.go:57] Caching tarball of preloaded images
	I0911 04:12:15.402138    3658 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:12:15.402144    3658 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:12:15.402209    3658 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/kindnet-687000/config.json ...
	I0911 04:12:15.402222    3658 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/kindnet-687000/config.json: {Name:mkf7d4c14e28cfc93461841d6ede3131109780a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:12:15.402443    3658 start.go:365] acquiring machines lock for kindnet-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:12:15.402473    3658 start.go:369] acquired machines lock for "kindnet-687000" in 24.292µs
	I0911 04:12:15.402484    3658 start.go:93] Provisioning new machine with config: &{Name:kindnet-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:12:15.402515    3658 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:12:15.411059    3658 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:12:15.427384    3658 start.go:159] libmachine.API.Create for "kindnet-687000" (driver="qemu2")
	I0911 04:12:15.427403    3658 client.go:168] LocalClient.Create starting
	I0911 04:12:15.427460    3658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:12:15.427489    3658 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:15.427500    3658 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:15.427547    3658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:12:15.427567    3658 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:15.427578    3658 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:15.427910    3658 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:12:15.543881    3658 main.go:141] libmachine: Creating SSH key...
	I0911 04:12:15.674594    3658 main.go:141] libmachine: Creating Disk image...
	I0911 04:12:15.674600    3658 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:12:15.674740    3658 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/disk.qcow2
	I0911 04:12:15.683233    3658 main.go:141] libmachine: STDOUT: 
	I0911 04:12:15.683246    3658 main.go:141] libmachine: STDERR: 
	I0911 04:12:15.683308    3658 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/disk.qcow2 +20000M
	I0911 04:12:15.690364    3658 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:12:15.690377    3658 main.go:141] libmachine: STDERR: 
	I0911 04:12:15.690399    3658 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/disk.qcow2
	I0911 04:12:15.690409    3658 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:12:15.690442    3658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:59:32:fc:31:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/disk.qcow2
	I0911 04:12:15.691919    3658 main.go:141] libmachine: STDOUT: 
	I0911 04:12:15.691931    3658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:12:15.691952    3658 client.go:171] LocalClient.Create took 264.552042ms
	I0911 04:12:17.694085    3658 start.go:128] duration metric: createHost completed in 2.291617s
	I0911 04:12:17.694176    3658 start.go:83] releasing machines lock for "kindnet-687000", held for 2.291765083s
	W0911 04:12:17.694277    3658 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:12:17.705709    3658 out.go:177] * Deleting "kindnet-687000" in qemu2 ...
	W0911 04:12:17.726560    3658 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:12:17.726590    3658 start.go:687] Will try again in 5 seconds ...
	I0911 04:12:22.728740    3658 start.go:365] acquiring machines lock for kindnet-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:12:22.729187    3658 start.go:369] acquired machines lock for "kindnet-687000" in 346.5µs
	I0911 04:12:22.729323    3658 start.go:93] Provisioning new machine with config: &{Name:kindnet-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kindnet-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:12:22.729569    3658 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:12:22.738325    3658 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:12:22.788666    3658 start.go:159] libmachine.API.Create for "kindnet-687000" (driver="qemu2")
	I0911 04:12:22.788722    3658 client.go:168] LocalClient.Create starting
	I0911 04:12:22.788849    3658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:12:22.788906    3658 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:22.788936    3658 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:22.789001    3658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:12:22.789041    3658 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:22.789053    3658 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:22.789602    3658 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:12:22.911028    3658 main.go:141] libmachine: Creating SSH key...
	I0911 04:12:22.965894    3658 main.go:141] libmachine: Creating Disk image...
	I0911 04:12:22.965899    3658 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:12:22.966032    3658 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/disk.qcow2
	I0911 04:12:22.974552    3658 main.go:141] libmachine: STDOUT: 
	I0911 04:12:22.974567    3658 main.go:141] libmachine: STDERR: 
	I0911 04:12:22.974625    3658 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/disk.qcow2 +20000M
	I0911 04:12:22.981728    3658 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:12:22.981749    3658 main.go:141] libmachine: STDERR: 
	I0911 04:12:22.981763    3658 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/disk.qcow2
	I0911 04:12:22.981774    3658 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:12:22.981814    3658 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:a5:3e:14:7c:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kindnet-687000/disk.qcow2
	I0911 04:12:22.983354    3658 main.go:141] libmachine: STDOUT: 
	I0911 04:12:22.983367    3658 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:12:22.983379    3658 client.go:171] LocalClient.Create took 194.656875ms
	I0911 04:12:24.985464    3658 start.go:128] duration metric: createHost completed in 2.255943958s
	I0911 04:12:24.985531    3658 start.go:83] releasing machines lock for "kindnet-687000", held for 2.256390458s
	W0911 04:12:24.985926    3658 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:12:24.996636    3658 out.go:177] 
	W0911 04:12:25.000616    3658 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:12:25.000638    3658 out.go:239] * 
	W0911 04:12:25.003320    3658 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:12:25.012618    3658 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.74s)
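Note on the failure mode: the kindnet start, like every qemu2 start in this group, dies at the same point. socket_vmnet_client cannot dial the unix socket at /var/run/socket_vmnet, so QEMU is never handed a network fd; minikube retries once after 5 seconds and then exits with status 80. A minimal triage sketch in shell follows; the daemon path mirrors the client path shown in the commands above, and the gateway address is only illustrative, both assumptions rather than values taken from this log:

	# Does the daemon socket exist, and who owns it?
	ls -l /var/run/socket_vmnet
	# Start the daemon by hand (needs root); flag and paths follow the
	# socket_vmnet project's documented invocation, assumed here
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

If the daemon comes up, a rerun should get past the "Connection refused" stage.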

TestNetworkPlugins/group/calico/Start (9.75s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
E0911 04:12:31.349057    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.747882417s)

-- stdout --
	* [calico-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-687000 in cluster calico-687000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-687000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:12:27.224302    3772 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:12:27.224418    3772 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:12:27.224420    3772 out.go:309] Setting ErrFile to fd 2...
	I0911 04:12:27.224423    3772 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:12:27.224537    3772 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:12:27.225513    3772 out.go:303] Setting JSON to false
	I0911 04:12:27.241571    3772 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2521,"bootTime":1694428226,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:12:27.241638    3772 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:12:27.245555    3772 out.go:177] * [calico-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:12:27.252611    3772 notify.go:220] Checking for updates...
	I0911 04:12:27.256503    3772 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:12:27.259506    3772 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:12:27.262557    3772 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:12:27.265453    3772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:12:27.268525    3772 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:12:27.271496    3772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:12:27.274676    3772 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:12:27.274718    3772 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:12:27.278535    3772 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:12:27.285471    3772 start.go:298] selected driver: qemu2
	I0911 04:12:27.285478    3772 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:12:27.285484    3772 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:12:27.287604    3772 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:12:27.290515    3772 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:12:27.293615    3772 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:12:27.293637    3772 cni.go:84] Creating CNI manager for "calico"
	I0911 04:12:27.293640    3772 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I0911 04:12:27.293647    3772 start_flags.go:321] config:
	{Name:calico-687000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:12:27.298088    3772 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:12:27.306495    3772 out.go:177] * Starting control plane node calico-687000 in cluster calico-687000
	I0911 04:12:27.310287    3772 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:12:27.310304    3772 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:12:27.310328    3772 cache.go:57] Caching tarball of preloaded images
	I0911 04:12:27.310387    3772 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:12:27.310392    3772 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:12:27.310469    3772 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/calico-687000/config.json ...
	I0911 04:12:27.310481    3772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/calico-687000/config.json: {Name:mk0d4920ab61b8ff8fe7f80424dead372e9eb0cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:12:27.310692    3772 start.go:365] acquiring machines lock for calico-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:12:27.310725    3772 start.go:369] acquired machines lock for "calico-687000" in 28.25µs
	I0911 04:12:27.310736    3772 start.go:93] Provisioning new machine with config: &{Name:calico-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:12:27.310771    3772 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:12:27.318331    3772 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:12:27.334146    3772 start.go:159] libmachine.API.Create for "calico-687000" (driver="qemu2")
	I0911 04:12:27.334166    3772 client.go:168] LocalClient.Create starting
	I0911 04:12:27.334248    3772 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:12:27.334289    3772 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:27.334302    3772 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:27.334346    3772 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:12:27.334366    3772 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:27.334376    3772 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:27.334734    3772 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:12:27.452550    3772 main.go:141] libmachine: Creating SSH key...
	I0911 04:12:27.536622    3772 main.go:141] libmachine: Creating Disk image...
	I0911 04:12:27.536628    3772 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:12:27.536759    3772 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/disk.qcow2
	I0911 04:12:27.545280    3772 main.go:141] libmachine: STDOUT: 
	I0911 04:12:27.545292    3772 main.go:141] libmachine: STDERR: 
	I0911 04:12:27.545363    3772 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/disk.qcow2 +20000M
	I0911 04:12:27.552530    3772 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:12:27.552543    3772 main.go:141] libmachine: STDERR: 
	I0911 04:12:27.552562    3772 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/disk.qcow2
	I0911 04:12:27.552567    3772 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:12:27.552595    3772 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:84:93:eb:7b:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/disk.qcow2
	I0911 04:12:27.554121    3772 main.go:141] libmachine: STDOUT: 
	I0911 04:12:27.554134    3772 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:12:27.554159    3772 client.go:171] LocalClient.Create took 219.995541ms
	I0911 04:12:29.556248    3772 start.go:128] duration metric: createHost completed in 2.245531708s
	I0911 04:12:29.556311    3772 start.go:83] releasing machines lock for "calico-687000", held for 2.245646458s
	W0911 04:12:29.556371    3772 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:12:29.564933    3772 out.go:177] * Deleting "calico-687000" in qemu2 ...
	W0911 04:12:29.585523    3772 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:12:29.585556    3772 start.go:687] Will try again in 5 seconds ...
	I0911 04:12:34.587586    3772 start.go:365] acquiring machines lock for calico-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:12:34.588019    3772 start.go:369] acquired machines lock for "calico-687000" in 341.708µs
	I0911 04:12:34.588131    3772 start.go:93] Provisioning new machine with config: &{Name:calico-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:12:34.588505    3772 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:12:34.598196    3772 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:12:34.646957    3772 start.go:159] libmachine.API.Create for "calico-687000" (driver="qemu2")
	I0911 04:12:34.647001    3772 client.go:168] LocalClient.Create starting
	I0911 04:12:34.647149    3772 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:12:34.647199    3772 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:34.647214    3772 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:34.647284    3772 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:12:34.647317    3772 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:34.647332    3772 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:34.647805    3772 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:12:34.776825    3772 main.go:141] libmachine: Creating SSH key...
	I0911 04:12:34.888294    3772 main.go:141] libmachine: Creating Disk image...
	I0911 04:12:34.888299    3772 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:12:34.888445    3772 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/disk.qcow2
	I0911 04:12:34.897173    3772 main.go:141] libmachine: STDOUT: 
	I0911 04:12:34.897189    3772 main.go:141] libmachine: STDERR: 
	I0911 04:12:34.897249    3772 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/disk.qcow2 +20000M
	I0911 04:12:34.904539    3772 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:12:34.904552    3772 main.go:141] libmachine: STDERR: 
	I0911 04:12:34.904570    3772 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/disk.qcow2
	I0911 04:12:34.904578    3772 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:12:34.904608    3772 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:61:a9:95:1f:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/calico-687000/disk.qcow2
	I0911 04:12:34.906122    3772 main.go:141] libmachine: STDOUT: 
	I0911 04:12:34.906133    3772 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:12:34.906144    3772 client.go:171] LocalClient.Create took 259.146ms
	I0911 04:12:36.908242    3772 start.go:128] duration metric: createHost completed in 2.319781667s
	I0911 04:12:36.908302    3772 start.go:83] releasing machines lock for "calico-687000", held for 2.320334375s
	W0911 04:12:36.908822    3772 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:12:36.916424    3772 out.go:177] 
	W0911 04:12:36.920527    3772 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:12:36.920549    3772 out.go:239] * 
	W0911 04:12:36.923060    3772 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:12:36.930382    3772 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.75s)
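The disk-image phase, by contrast, succeeds on every attempt: qemu-img converts the raw scratch file to qcow2 and then grows it by the requested 20000 MB. Those two steps can be replayed in isolation to rule qemu-img out as a factor; a sketch with the long Jenkins paths abbreviated to $HOME/.minikube and $PROFILE standing in for a profile name such as calico-687000 (both placeholders, not values from this log):

	# Convert the raw disk to qcow2, as libmachine does above
	qemu-img convert -f raw -O qcow2 "$HOME/.minikube/machines/$PROFILE/disk.qcow2.raw" "$HOME/.minikube/machines/$PROFILE/disk.qcow2"
	# Grow the image by 20000 MB to match the requested disk size
	qemu-img resize "$HOME/.minikube/machines/$PROFILE/disk.qcow2" +20000M

Both commands report empty STDERR in the logs above, so the failure is isolated to the VM launch step.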

TestNetworkPlugins/group/custom-flannel/Start (9.93s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.925551541s)

-- stdout --
	* [custom-flannel-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-687000 in cluster custom-flannel-687000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-687000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:12:39.289430    3890 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:12:39.289553    3890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:12:39.289555    3890 out.go:309] Setting ErrFile to fd 2...
	I0911 04:12:39.289558    3890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:12:39.289661    3890 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:12:39.290619    3890 out.go:303] Setting JSON to false
	I0911 04:12:39.305414    3890 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2533,"bootTime":1694428226,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:12:39.305468    3890 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:12:39.310801    3890 out.go:177] * [custom-flannel-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:12:39.318752    3890 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:12:39.322712    3890 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:12:39.318746    3890 notify.go:220] Checking for updates...
	I0911 04:12:39.328721    3890 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:12:39.331765    3890 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:12:39.334662    3890 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:12:39.337742    3890 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:12:39.341538    3890 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:12:39.341588    3890 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:12:39.345725    3890 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:12:39.352733    3890 start.go:298] selected driver: qemu2
	I0911 04:12:39.352738    3890 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:12:39.352744    3890 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:12:39.354717    3890 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:12:39.357709    3890 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:12:39.360798    3890 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:12:39.360817    3890 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0911 04:12:39.360828    3890 start_flags.go:316] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0911 04:12:39.360835    3890 start_flags.go:321] config:
	{Name:custom-flannel-687000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:12:39.365220    3890 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:12:39.373732    3890 out.go:177] * Starting control plane node custom-flannel-687000 in cluster custom-flannel-687000
	I0911 04:12:39.377751    3890 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:12:39.377773    3890 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:12:39.377796    3890 cache.go:57] Caching tarball of preloaded images
	I0911 04:12:39.377864    3890 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:12:39.377869    3890 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:12:39.377939    3890 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/custom-flannel-687000/config.json ...
	I0911 04:12:39.377950    3890 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/custom-flannel-687000/config.json: {Name:mkb6b1c70d0f13c8d2c6718684c321b98e023302 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:12:39.378156    3890 start.go:365] acquiring machines lock for custom-flannel-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:12:39.378189    3890 start.go:369] acquired machines lock for "custom-flannel-687000" in 23.791µs
	I0911 04:12:39.378200    3890 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:12:39.378232    3890 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:12:39.386716    3890 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:12:39.402507    3890 start.go:159] libmachine.API.Create for "custom-flannel-687000" (driver="qemu2")
	I0911 04:12:39.402539    3890 client.go:168] LocalClient.Create starting
	I0911 04:12:39.402586    3890 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:12:39.402609    3890 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:39.402625    3890 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:39.402672    3890 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:12:39.402691    3890 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:39.402700    3890 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:39.403000    3890 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:12:39.519710    3890 main.go:141] libmachine: Creating SSH key...
	I0911 04:12:39.636796    3890 main.go:141] libmachine: Creating Disk image...
	I0911 04:12:39.636801    3890 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:12:39.636939    3890 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/disk.qcow2
	I0911 04:12:39.645401    3890 main.go:141] libmachine: STDOUT: 
	I0911 04:12:39.645418    3890 main.go:141] libmachine: STDERR: 
	I0911 04:12:39.645483    3890 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/disk.qcow2 +20000M
	I0911 04:12:39.652701    3890 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:12:39.652714    3890 main.go:141] libmachine: STDERR: 
	I0911 04:12:39.652739    3890 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/disk.qcow2
	I0911 04:12:39.652747    3890 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:12:39.652794    3890 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:5b:9e:1e:4a:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/disk.qcow2
	I0911 04:12:39.654326    3890 main.go:141] libmachine: STDOUT: 
	I0911 04:12:39.654345    3890 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:12:39.654371    3890 client.go:171] LocalClient.Create took 251.833667ms
	I0911 04:12:41.656562    3890 start.go:128] duration metric: createHost completed in 2.278381792s
	I0911 04:12:41.656610    3890 start.go:83] releasing machines lock for "custom-flannel-687000", held for 2.278479792s
	W0911 04:12:41.656664    3890 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:12:41.665134    3890 out.go:177] * Deleting "custom-flannel-687000" in qemu2 ...
	W0911 04:12:41.685125    3890 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:12:41.685157    3890 start.go:687] Will try again in 5 seconds ...
	I0911 04:12:46.687257    3890 start.go:365] acquiring machines lock for custom-flannel-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:12:46.687662    3890 start.go:369] acquired machines lock for "custom-flannel-687000" in 307.375µs
	I0911 04:12:46.687782    3890 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:custom-flannel-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:12:46.688098    3890 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:12:46.699907    3890 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:12:46.746043    3890 start.go:159] libmachine.API.Create for "custom-flannel-687000" (driver="qemu2")
	I0911 04:12:46.746094    3890 client.go:168] LocalClient.Create starting
	I0911 04:12:46.746224    3890 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:12:46.746291    3890 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:46.746309    3890 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:46.746386    3890 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:12:46.746426    3890 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:46.746443    3890 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:46.746948    3890 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:12:46.896667    3890 main.go:141] libmachine: Creating SSH key...
	I0911 04:12:47.133538    3890 main.go:141] libmachine: Creating Disk image...
	I0911 04:12:47.133548    3890 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:12:47.133705    3890 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/disk.qcow2
	I0911 04:12:47.142438    3890 main.go:141] libmachine: STDOUT: 
	I0911 04:12:47.142452    3890 main.go:141] libmachine: STDERR: 
	I0911 04:12:47.142504    3890 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/disk.qcow2 +20000M
	I0911 04:12:47.149754    3890 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:12:47.149767    3890 main.go:141] libmachine: STDERR: 
	I0911 04:12:47.149781    3890 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/disk.qcow2
	I0911 04:12:47.149788    3890 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:12:47.149836    3890 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:3d:ff:49:a4:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/custom-flannel-687000/disk.qcow2
	I0911 04:12:47.151353    3890 main.go:141] libmachine: STDOUT: 
	I0911 04:12:47.151366    3890 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:12:47.151378    3890 client.go:171] LocalClient.Create took 405.288125ms
	I0911 04:12:49.153471    3890 start.go:128] duration metric: createHost completed in 2.465401917s
	I0911 04:12:49.153543    3890 start.go:83] releasing machines lock for "custom-flannel-687000", held for 2.4659375s
	W0911 04:12:49.153797    3890 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:12:49.158499    3890 out.go:177] 
	W0911 04:12:49.162402    3890 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:12:49.162438    3890 out.go:239] * 
	* 
	W0911 04:12:49.163656    3890 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:12:49.173383    3890 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.93s)

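Every qemu2 start in this group dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU is never launched and minikube exits with status 80. A quick way to reproduce the failure outside the test harness is to probe the socket directly on the CI host; this sketch only uses the client and socket paths already shown in the log above:

	# does the socket file exist, and is a daemon process alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# socket_vmnet_client connects to the socket, then runs the given command;
	# with a dead daemon it prints the same "Connection refused" seen above
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the socket path exists but nothing is listening on it, every connection attempt is refused, which matches the repeated failures in the rest of this group.
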
TestNetworkPlugins/group/false/Start (9.9s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.902360042s)

-- stdout --
	* [false-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-687000 in cluster false-687000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-687000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0911 04:12:51.542855    4011 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:12:51.542998    4011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:12:51.543001    4011 out.go:309] Setting ErrFile to fd 2...
	I0911 04:12:51.543004    4011 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:12:51.543115    4011 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:12:51.544190    4011 out.go:303] Setting JSON to false
	I0911 04:12:51.559212    4011 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2545,"bootTime":1694428226,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:12:51.559288    4011 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:12:51.563729    4011 out.go:177] * [false-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:12:51.571922    4011 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:12:51.575790    4011 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:12:51.571968    4011 notify.go:220] Checking for updates...
	I0911 04:12:51.581827    4011 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:12:51.584759    4011 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:12:51.587868    4011 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:12:51.590884    4011 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:12:51.592983    4011 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:12:51.593037    4011 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:12:51.596898    4011 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:12:51.603668    4011 start.go:298] selected driver: qemu2
	I0911 04:12:51.603673    4011 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:12:51.603678    4011 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:12:51.605679    4011 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:12:51.608850    4011 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:12:51.611955    4011 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:12:51.611980    4011 cni.go:84] Creating CNI manager for "false"
	I0911 04:12:51.611992    4011 start_flags.go:321] config:
	{Name:false-687000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:false-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:12:51.616145    4011 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:12:51.623817    4011 out.go:177] * Starting control plane node false-687000 in cluster false-687000
	I0911 04:12:51.627878    4011 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:12:51.627896    4011 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:12:51.627910    4011 cache.go:57] Caching tarball of preloaded images
	I0911 04:12:51.627992    4011 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:12:51.627998    4011 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:12:51.628056    4011 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/false-687000/config.json ...
	I0911 04:12:51.628071    4011 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/false-687000/config.json: {Name:mkb4fa5e771a104ed7a12d324acc3519f9a98f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:12:51.628289    4011 start.go:365] acquiring machines lock for false-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:12:51.628319    4011 start.go:369] acquired machines lock for "false-687000" in 24.292µs
	I0911 04:12:51.628329    4011 start.go:93] Provisioning new machine with config: &{Name:false-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:false-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:12:51.628357    4011 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:12:51.632848    4011 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:12:51.648538    4011 start.go:159] libmachine.API.Create for "false-687000" (driver="qemu2")
	I0911 04:12:51.648561    4011 client.go:168] LocalClient.Create starting
	I0911 04:12:51.648616    4011 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:12:51.648642    4011 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:51.648657    4011 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:51.648702    4011 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:12:51.648721    4011 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:51.648727    4011 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:51.649040    4011 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:12:51.763268    4011 main.go:141] libmachine: Creating SSH key...
	I0911 04:12:51.943506    4011 main.go:141] libmachine: Creating Disk image...
	I0911 04:12:51.943512    4011 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:12:51.943671    4011 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/disk.qcow2
	I0911 04:12:51.952300    4011 main.go:141] libmachine: STDOUT: 
	I0911 04:12:51.952316    4011 main.go:141] libmachine: STDERR: 
	I0911 04:12:51.952378    4011 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/disk.qcow2 +20000M
	I0911 04:12:51.959502    4011 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:12:51.959516    4011 main.go:141] libmachine: STDERR: 
	I0911 04:12:51.959541    4011 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/disk.qcow2
	I0911 04:12:51.959546    4011 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:12:51.959581    4011 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:22:58:ae:11:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/disk.qcow2
	I0911 04:12:51.961127    4011 main.go:141] libmachine: STDOUT: 
	I0911 04:12:51.961138    4011 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:12:51.961159    4011 client.go:171] LocalClient.Create took 312.599167ms
	I0911 04:12:53.963254    4011 start.go:128] duration metric: createHost completed in 2.334954584s
	I0911 04:12:53.963313    4011 start.go:83] releasing machines lock for "false-687000", held for 2.335058334s
	W0911 04:12:53.963397    4011 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:12:53.974782    4011 out.go:177] * Deleting "false-687000" in qemu2 ...
	W0911 04:12:53.994584    4011 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:12:53.994616    4011 start.go:687] Will try again in 5 seconds ...
	I0911 04:12:58.996755    4011 start.go:365] acquiring machines lock for false-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:12:58.997284    4011 start.go:369] acquired machines lock for "false-687000" in 417.958µs
	I0911 04:12:58.997451    4011 start.go:93] Provisioning new machine with config: &{Name:false-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:false-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:12:58.997768    4011 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:12:59.009671    4011 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:12:59.057131    4011 start.go:159] libmachine.API.Create for "false-687000" (driver="qemu2")
	I0911 04:12:59.057184    4011 client.go:168] LocalClient.Create starting
	I0911 04:12:59.057307    4011 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:12:59.057383    4011 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:59.057407    4011 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:59.057479    4011 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:12:59.057517    4011 main.go:141] libmachine: Decoding PEM data...
	I0911 04:12:59.057530    4011 main.go:141] libmachine: Parsing certificate...
	I0911 04:12:59.058074    4011 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:12:59.185992    4011 main.go:141] libmachine: Creating SSH key...
	I0911 04:12:59.359519    4011 main.go:141] libmachine: Creating Disk image...
	I0911 04:12:59.359530    4011 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:12:59.359681    4011 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/disk.qcow2
	I0911 04:12:59.368519    4011 main.go:141] libmachine: STDOUT: 
	I0911 04:12:59.368538    4011 main.go:141] libmachine: STDERR: 
	I0911 04:12:59.368603    4011 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/disk.qcow2 +20000M
	I0911 04:12:59.375977    4011 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:12:59.375989    4011 main.go:141] libmachine: STDERR: 
	I0911 04:12:59.376007    4011 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/disk.qcow2
	I0911 04:12:59.376018    4011 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:12:59.376072    4011 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:67:76:a7:1e:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/false-687000/disk.qcow2
	I0911 04:12:59.377594    4011 main.go:141] libmachine: STDOUT: 
	I0911 04:12:59.377605    4011 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:12:59.377618    4011 client.go:171] LocalClient.Create took 320.436834ms
	I0911 04:13:01.379772    4011 start.go:128] duration metric: createHost completed in 2.38203325s
	I0911 04:13:01.379832    4011 start.go:83] releasing machines lock for "false-687000", held for 2.382601834s
	W0911 04:13:01.380251    4011 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:13:01.388746    4011 out.go:177] 
	W0911 04:13:01.392922    4011 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:13:01.392949    4011 out.go:239] * 
	* 
	W0911 04:13:01.395566    4011 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:13:01.404915    4011 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.90s)

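The stderr above also shows minikube's retry logic: after the first "Connection refused" it deletes the half-created machine, waits 5 seconds, and provisions again, but the second attempt hits the same dead socket, so the retry cannot succeed while the daemon is down. Recovery means restarting the daemon on the host. A minimal sketch, assuming the /opt/socket_vmnet install layout from these logs; the gateway address is illustrative and must match the host's vmnet configuration:

	# run as root; creates and listens on /var/run/socket_vmnet
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

On a Jenkins agent this daemon would normally be kept alive by launchd rather than started by hand; once it stops, every qemu2-driver test in the run fails the same way.
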
TestNetworkPlugins/group/enable-default-cni/Start (9.7s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.70186725s)

-- stdout --
	* [enable-default-cni-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-687000 in cluster enable-default-cni-687000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-687000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0911 04:13:03.591895    4121 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:13:03.592031    4121 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:13:03.592034    4121 out.go:309] Setting ErrFile to fd 2...
	I0911 04:13:03.592037    4121 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:13:03.592152    4121 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:13:03.593180    4121 out.go:303] Setting JSON to false
	I0911 04:13:03.608123    4121 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2557,"bootTime":1694428226,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:13:03.608195    4121 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:13:03.611954    4121 out.go:177] * [enable-default-cni-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:13:03.615975    4121 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:13:03.620043    4121 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:13:03.616033    4121 notify.go:220] Checking for updates...
	I0911 04:13:03.623936    4121 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:13:03.626990    4121 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:13:03.629992    4121 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:13:03.632951    4121 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:13:03.636321    4121 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:13:03.636368    4121 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:13:03.641025    4121 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:13:03.647911    4121 start.go:298] selected driver: qemu2
	I0911 04:13:03.647916    4121 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:13:03.647922    4121 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:13:03.649806    4121 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:13:03.652980    4121 out.go:177] * Automatically selected the socket_vmnet network
	E0911 04:13:03.655936    4121 start_flags.go:455] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0911 04:13:03.655946    4121 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:13:03.655965    4121 cni.go:84] Creating CNI manager for "bridge"
	I0911 04:13:03.655969    4121 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:13:03.655974    4121 start_flags.go:321] config:
	{Name:enable-default-cni-687000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:13:03.659834    4121 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:13:03.668007    4121 out.go:177] * Starting control plane node enable-default-cni-687000 in cluster enable-default-cni-687000
	I0911 04:13:03.671927    4121 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:13:03.671954    4121 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:13:03.671971    4121 cache.go:57] Caching tarball of preloaded images
	I0911 04:13:03.672039    4121 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:13:03.672045    4121 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:13:03.672114    4121 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/enable-default-cni-687000/config.json ...
	I0911 04:13:03.672126    4121 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/enable-default-cni-687000/config.json: {Name:mk30dd9b58aa409fa3835968cf5c5bf1281c1358 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:13:03.672315    4121 start.go:365] acquiring machines lock for enable-default-cni-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:13:03.672344    4121 start.go:369] acquired machines lock for "enable-default-cni-687000" in 23.458µs
	I0911 04:13:03.672355    4121 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:13:03.672394    4121 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:13:03.677910    4121 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:13:03.693234    4121 start.go:159] libmachine.API.Create for "enable-default-cni-687000" (driver="qemu2")
	I0911 04:13:03.693253    4121 client.go:168] LocalClient.Create starting
	I0911 04:13:03.693321    4121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:13:03.693354    4121 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:03.693365    4121 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:03.693413    4121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:13:03.693433    4121 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:03.693442    4121 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:03.693757    4121 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:13:03.808479    4121 main.go:141] libmachine: Creating SSH key...
	I0911 04:13:03.917356    4121 main.go:141] libmachine: Creating Disk image...
	I0911 04:13:03.917362    4121 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:13:03.917493    4121 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/disk.qcow2
	I0911 04:13:03.926075    4121 main.go:141] libmachine: STDOUT: 
	I0911 04:13:03.926089    4121 main.go:141] libmachine: STDERR: 
	I0911 04:13:03.926135    4121 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/disk.qcow2 +20000M
	I0911 04:13:03.933337    4121 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:13:03.933367    4121 main.go:141] libmachine: STDERR: 
	I0911 04:13:03.933387    4121 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/disk.qcow2
	I0911 04:13:03.933396    4121 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:13:03.933440    4121 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:2c:d3:9d:65:24 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/disk.qcow2
	I0911 04:13:03.934949    4121 main.go:141] libmachine: STDOUT: 
	I0911 04:13:03.934962    4121 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:13:03.934978    4121 client.go:171] LocalClient.Create took 241.725917ms
	I0911 04:13:05.937139    4121 start.go:128] duration metric: createHost completed in 2.264800875s
	I0911 04:13:05.937356    4121 start.go:83] releasing machines lock for "enable-default-cni-687000", held for 2.2650705s
	W0911 04:13:05.937416    4121 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:13:05.945774    4121 out.go:177] * Deleting "enable-default-cni-687000" in qemu2 ...
	W0911 04:13:05.966759    4121 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:13:05.966790    4121 start.go:687] Will try again in 5 seconds ...
	I0911 04:13:10.968899    4121 start.go:365] acquiring machines lock for enable-default-cni-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:13:10.969448    4121 start.go:369] acquired machines lock for "enable-default-cni-687000" in 408.208µs
	I0911 04:13:10.969621    4121 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:13:10.969918    4121 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:13:10.978652    4121 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:13:11.025304    4121 start.go:159] libmachine.API.Create for "enable-default-cni-687000" (driver="qemu2")
	I0911 04:13:11.025345    4121 client.go:168] LocalClient.Create starting
	I0911 04:13:11.025444    4121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:13:11.025498    4121 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:11.025527    4121 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:11.025602    4121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:13:11.025643    4121 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:11.025657    4121 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:11.026154    4121 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:13:11.157641    4121 main.go:141] libmachine: Creating SSH key...
	I0911 04:13:11.205602    4121 main.go:141] libmachine: Creating Disk image...
	I0911 04:13:11.205607    4121 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:13:11.205744    4121 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/disk.qcow2
	I0911 04:13:11.214147    4121 main.go:141] libmachine: STDOUT: 
	I0911 04:13:11.214163    4121 main.go:141] libmachine: STDERR: 
	I0911 04:13:11.214218    4121 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/disk.qcow2 +20000M
	I0911 04:13:11.221278    4121 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:13:11.221292    4121 main.go:141] libmachine: STDERR: 
	I0911 04:13:11.221304    4121 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/disk.qcow2
	I0911 04:13:11.221310    4121 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:13:11.221351    4121 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:a2:9d:08:73:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/enable-default-cni-687000/disk.qcow2
	I0911 04:13:11.222853    4121 main.go:141] libmachine: STDOUT: 
	I0911 04:13:11.222866    4121 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:13:11.222878    4121 client.go:171] LocalClient.Create took 197.535583ms
	I0911 04:13:13.224975    4121 start.go:128] duration metric: createHost completed in 2.255106083s
	I0911 04:13:13.225043    4121 start.go:83] releasing machines lock for "enable-default-cni-687000", held for 2.255640708s
	W0911 04:13:13.225372    4121 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:13:13.235056    4121 out.go:177] 
	W0911 04:13:13.239997    4121 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:13:13.240022    4121 out.go:239] * 
	* 
	W0911 04:13:13.242717    4121 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:13:13.252079    4121 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.70s)

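This variant adds one detail in its stderr: the E-level line "Found deprecated --enable-default-cni flag, setting --cni=bridge". minikube rewrites the legacy flag into the bridge CNI (the config dump accordingly shows NetworkPlugin:cni and CNI:bridge), so the deprecated invocation used by the test and the modern spelling are equivalent (other flags from the test command omitted here for brevity):

	out/minikube-darwin-arm64 start -p enable-default-cni-687000 --enable-default-cni=true --driver=qemu2
	out/minikube-darwin-arm64 start -p enable-default-cni-687000 --cni=bridge --driver=qemu2

The failure itself has nothing to do with the CNI choice; it is the same socket_vmnet connection refusal as every other start in this group.
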
TestNetworkPlugins/group/flannel/Start (9.72s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.7170645s)

-- stdout --
	* [flannel-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-687000 in cluster flannel-687000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-687000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0911 04:13:15.422895    4231 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:13:15.423035    4231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:13:15.423038    4231 out.go:309] Setting ErrFile to fd 2...
	I0911 04:13:15.423040    4231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:13:15.423145    4231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:13:15.424151    4231 out.go:303] Setting JSON to false
	I0911 04:13:15.439274    4231 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2569,"bootTime":1694428226,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:13:15.439326    4231 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:13:15.444257    4231 out.go:177] * [flannel-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:13:15.451425    4231 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:13:15.451500    4231 notify.go:220] Checking for updates...
	I0911 04:13:15.455305    4231 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:13:15.458341    4231 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:13:15.461353    4231 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:13:15.464261    4231 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:13:15.467356    4231 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:13:15.470651    4231 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:13:15.470919    4231 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:13:15.474328    4231 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:13:15.481328    4231 start.go:298] selected driver: qemu2
	I0911 04:13:15.481334    4231 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:13:15.481340    4231 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:13:15.483333    4231 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:13:15.484722    4231 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:13:15.488500    4231 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:13:15.488527    4231 cni.go:84] Creating CNI manager for "flannel"
	I0911 04:13:15.488530    4231 start_flags.go:316] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0911 04:13:15.488535    4231 start_flags.go:321] config:
	{Name:flannel-687000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:flannel-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:13:15.492527    4231 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:13:15.500307    4231 out.go:177] * Starting control plane node flannel-687000 in cluster flannel-687000
	I0911 04:13:15.504367    4231 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:13:15.504393    4231 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:13:15.504409    4231 cache.go:57] Caching tarball of preloaded images
	I0911 04:13:15.504464    4231 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:13:15.504470    4231 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:13:15.504536    4231 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/flannel-687000/config.json ...
	I0911 04:13:15.504548    4231 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/flannel-687000/config.json: {Name:mk8fa1bca54d4cb791cf9a67bac7d62ea4b0eba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:13:15.504757    4231 start.go:365] acquiring machines lock for flannel-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:13:15.504784    4231 start.go:369] acquired machines lock for "flannel-687000" in 22.542µs
	I0911 04:13:15.504800    4231 start.go:93] Provisioning new machine with config: &{Name:flannel-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:flannel-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:13:15.504830    4231 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:13:15.513323    4231 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:13:15.528377    4231 start.go:159] libmachine.API.Create for "flannel-687000" (driver="qemu2")
	I0911 04:13:15.528403    4231 client.go:168] LocalClient.Create starting
	I0911 04:13:15.528470    4231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:13:15.528492    4231 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:15.528504    4231 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:15.528546    4231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:13:15.528563    4231 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:15.528573    4231 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:15.528878    4231 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:13:15.640334    4231 main.go:141] libmachine: Creating SSH key...
	I0911 04:13:15.723093    4231 main.go:141] libmachine: Creating Disk image...
	I0911 04:13:15.723098    4231 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:13:15.723232    4231 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/disk.qcow2
	I0911 04:13:15.731660    4231 main.go:141] libmachine: STDOUT: 
	I0911 04:13:15.731673    4231 main.go:141] libmachine: STDERR: 
	I0911 04:13:15.731738    4231 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/disk.qcow2 +20000M
	I0911 04:13:15.738946    4231 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:13:15.738959    4231 main.go:141] libmachine: STDERR: 
	I0911 04:13:15.738970    4231 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/disk.qcow2
	I0911 04:13:15.738978    4231 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:13:15.739014    4231 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:f7:4f:ee:88:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/disk.qcow2
	I0911 04:13:15.740516    4231 main.go:141] libmachine: STDOUT: 
	I0911 04:13:15.740526    4231 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:13:15.740547    4231 client.go:171] LocalClient.Create took 212.144458ms
	I0911 04:13:17.742674    4231 start.go:128] duration metric: createHost completed in 2.237897583s
	I0911 04:13:17.742733    4231 start.go:83] releasing machines lock for "flannel-687000", held for 2.238010792s
	W0911 04:13:17.742829    4231 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:13:17.750518    4231 out.go:177] * Deleting "flannel-687000" in qemu2 ...
	W0911 04:13:17.770381    4231 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:13:17.770415    4231 start.go:687] Will try again in 5 seconds ...
	I0911 04:13:22.772602    4231 start.go:365] acquiring machines lock for flannel-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:13:22.773038    4231 start.go:369] acquired machines lock for "flannel-687000" in 347.709µs
	I0911 04:13:22.773174    4231 start.go:93] Provisioning new machine with config: &{Name:flannel-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:flannel-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:13:22.773475    4231 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:13:22.778322    4231 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:13:22.825043    4231 start.go:159] libmachine.API.Create for "flannel-687000" (driver="qemu2")
	I0911 04:13:22.825088    4231 client.go:168] LocalClient.Create starting
	I0911 04:13:22.825225    4231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:13:22.825301    4231 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:22.825316    4231 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:22.825379    4231 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:13:22.825415    4231 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:22.825427    4231 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:22.825935    4231 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:13:22.953363    4231 main.go:141] libmachine: Creating SSH key...
	I0911 04:13:23.045655    4231 main.go:141] libmachine: Creating Disk image...
	I0911 04:13:23.045660    4231 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:13:23.045798    4231 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/disk.qcow2
	I0911 04:13:23.054660    4231 main.go:141] libmachine: STDOUT: 
	I0911 04:13:23.054673    4231 main.go:141] libmachine: STDERR: 
	I0911 04:13:23.054749    4231 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/disk.qcow2 +20000M
	I0911 04:13:23.061857    4231 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:13:23.061869    4231 main.go:141] libmachine: STDERR: 
	I0911 04:13:23.061882    4231 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/disk.qcow2
	I0911 04:13:23.061888    4231 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:13:23.061928    4231 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:61:dc:ea:d2:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/flannel-687000/disk.qcow2
	I0911 04:13:23.063483    4231 main.go:141] libmachine: STDOUT: 
	I0911 04:13:23.063495    4231 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:13:23.063506    4231 client.go:171] LocalClient.Create took 238.417917ms
	I0911 04:13:25.065683    4231 start.go:128] duration metric: createHost completed in 2.292253625s
	I0911 04:13:25.065741    4231 start.go:83] releasing machines lock for "flannel-687000", held for 2.292753583s
	W0911 04:13:25.066165    4231 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:13:25.076812    4231 out.go:177] 
	W0911 04:13:25.080794    4231 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:13:25.080819    4231 out.go:239] * 
	* 
	W0911 04:13:25.083894    4231 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:13:25.099060    4231 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.72s)
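
The connection failure can be reproduced without minikube by invoking the same wrapper the qemu2 driver uses. The command below is lifted from the log above, with `true` substituted for the qemu-system-aarch64 invocation as a trivial smoke test (that substitution is an assumption, not something the test itself runs):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true \
	  && echo "socket_vmnet reachable" \
	  || echo "connection refused: daemon not listening"

If this one-liner fails with the same "Connection refused", the choice of CNI (flannel, bridge, kubenet) is irrelevant; only the daemon needs attention.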

TestNetworkPlugins/group/bridge/Start (9.78s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.777309875s)

-- stdout --
	* [bridge-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-687000 in cluster bridge-687000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-687000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:13:27.445365    4351 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:13:27.445476    4351 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:13:27.445479    4351 out.go:309] Setting ErrFile to fd 2...
	I0911 04:13:27.445482    4351 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:13:27.445588    4351 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:13:27.446616    4351 out.go:303] Setting JSON to false
	I0911 04:13:27.461555    4351 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2581,"bootTime":1694428226,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:13:27.461638    4351 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:13:27.467297    4351 out.go:177] * [bridge-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:13:27.475432    4351 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:13:27.475497    4351 notify.go:220] Checking for updates...
	I0911 04:13:27.479328    4351 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:13:27.482327    4351 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:13:27.485394    4351 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:13:27.488279    4351 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:13:27.491341    4351 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:13:27.494715    4351 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:13:27.494754    4351 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:13:27.499359    4351 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:13:27.506295    4351 start.go:298] selected driver: qemu2
	I0911 04:13:27.506302    4351 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:13:27.506308    4351 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:13:27.508193    4351 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:13:27.511280    4351 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:13:27.514413    4351 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:13:27.514445    4351 cni.go:84] Creating CNI manager for "bridge"
	I0911 04:13:27.514458    4351 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:13:27.514464    4351 start_flags.go:321] config:
	{Name:bridge-687000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:bridge-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:13:27.518585    4351 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:13:27.522135    4351 out.go:177] * Starting control plane node bridge-687000 in cluster bridge-687000
	I0911 04:13:27.530368    4351 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:13:27.530405    4351 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:13:27.530421    4351 cache.go:57] Caching tarball of preloaded images
	I0911 04:13:27.530495    4351 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:13:27.530501    4351 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:13:27.530571    4351 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/bridge-687000/config.json ...
	I0911 04:13:27.530584    4351 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/bridge-687000/config.json: {Name:mke71d572a9e0c359a85dd64d22786f87ad10df5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:13:27.530782    4351 start.go:365] acquiring machines lock for bridge-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:13:27.530812    4351 start.go:369] acquired machines lock for "bridge-687000" in 24.292µs
	I0911 04:13:27.530823    4351 start.go:93] Provisioning new machine with config: &{Name:bridge-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:bridge-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:13:27.530853    4351 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:13:27.539360    4351 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:13:27.555432    4351 start.go:159] libmachine.API.Create for "bridge-687000" (driver="qemu2")
	I0911 04:13:27.555455    4351 client.go:168] LocalClient.Create starting
	I0911 04:13:27.555517    4351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:13:27.555543    4351 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:27.555552    4351 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:27.555593    4351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:13:27.555612    4351 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:27.555622    4351 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:27.556003    4351 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:13:27.673600    4351 main.go:141] libmachine: Creating SSH key...
	I0911 04:13:27.752997    4351 main.go:141] libmachine: Creating Disk image...
	I0911 04:13:27.753003    4351 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:13:27.753140    4351 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/disk.qcow2
	I0911 04:13:27.761625    4351 main.go:141] libmachine: STDOUT: 
	I0911 04:13:27.761638    4351 main.go:141] libmachine: STDERR: 
	I0911 04:13:27.761695    4351 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/disk.qcow2 +20000M
	I0911 04:13:27.768780    4351 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:13:27.768801    4351 main.go:141] libmachine: STDERR: 
	I0911 04:13:27.768818    4351 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/disk.qcow2
	I0911 04:13:27.768825    4351 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:13:27.768858    4351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:89:a2:2b:df:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/disk.qcow2
	I0911 04:13:27.770376    4351 main.go:141] libmachine: STDOUT: 
	I0911 04:13:27.770397    4351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:13:27.770420    4351 client.go:171] LocalClient.Create took 214.966375ms
	I0911 04:13:29.772564    4351 start.go:128] duration metric: createHost completed in 2.241761375s
	I0911 04:13:29.772615    4351 start.go:83] releasing machines lock for "bridge-687000", held for 2.241863583s
	W0911 04:13:29.772668    4351 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:13:29.781200    4351 out.go:177] * Deleting "bridge-687000" in qemu2 ...
	W0911 04:13:29.801403    4351 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:13:29.801432    4351 start.go:687] Will try again in 5 seconds ...
	I0911 04:13:34.803572    4351 start.go:365] acquiring machines lock for bridge-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:13:34.803990    4351 start.go:369] acquired machines lock for "bridge-687000" in 334.834µs
	I0911 04:13:34.804120    4351 start.go:93] Provisioning new machine with config: &{Name:bridge-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:bridge-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:13:34.804391    4351 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:13:34.813184    4351 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:13:34.859929    4351 start.go:159] libmachine.API.Create for "bridge-687000" (driver="qemu2")
	I0911 04:13:34.859969    4351 client.go:168] LocalClient.Create starting
	I0911 04:13:34.860108    4351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:13:34.860159    4351 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:34.860181    4351 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:34.860284    4351 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:13:34.860320    4351 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:34.860332    4351 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:34.860896    4351 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:13:34.991133    4351 main.go:141] libmachine: Creating SSH key...
	I0911 04:13:35.136099    4351 main.go:141] libmachine: Creating Disk image...
	I0911 04:13:35.136106    4351 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:13:35.136268    4351 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/disk.qcow2
	I0911 04:13:35.144985    4351 main.go:141] libmachine: STDOUT: 
	I0911 04:13:35.145000    4351 main.go:141] libmachine: STDERR: 
	I0911 04:13:35.145081    4351 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/disk.qcow2 +20000M
	I0911 04:13:35.152237    4351 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:13:35.152252    4351 main.go:141] libmachine: STDERR: 
	I0911 04:13:35.152271    4351 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/disk.qcow2
	I0911 04:13:35.152277    4351 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:13:35.152325    4351 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:fc:cc:bb:26:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/bridge-687000/disk.qcow2
	I0911 04:13:35.153916    4351 main.go:141] libmachine: STDOUT: 
	I0911 04:13:35.153928    4351 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:13:35.153940    4351 client.go:171] LocalClient.Create took 293.974916ms
	I0911 04:13:37.156097    4351 start.go:128] duration metric: createHost completed in 2.351749166s
	I0911 04:13:37.156164    4351 start.go:83] releasing machines lock for "bridge-687000", held for 2.352224542s
	W0911 04:13:37.156514    4351 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:13:37.167057    4351 out.go:177] 
	W0911 04:13:37.171044    4351 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:13:37.171086    4351 out.go:239] * 
	* 
	W0911 04:13:37.173869    4351 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:13:37.181994    4351 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.78s)
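
If the daemon is simply not running on the agent, starting it by hand before the test run should unblock the whole group. A sketch, assuming a default socket_vmnet installation under /opt/socket_vmnet (the gateway address is the project's documented example value, not something taken from this log):

	# socket_vmnet needs root to create the underlying vmnet interface;
	# the final argument is the socket path that minikube expects to find.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &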

TestNetworkPlugins/group/kubenet/Start (9.92s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-687000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.917964875s)

-- stdout --
	* [kubenet-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-687000 in cluster kubenet-687000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-687000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:13:39.356053    4461 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:13:39.356179    4461 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:13:39.356182    4461 out.go:309] Setting ErrFile to fd 2...
	I0911 04:13:39.356184    4461 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:13:39.356291    4461 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:13:39.357350    4461 out.go:303] Setting JSON to false
	I0911 04:13:39.372213    4461 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2593,"bootTime":1694428226,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:13:39.372279    4461 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:13:39.377886    4461 out.go:177] * [kubenet-687000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:13:39.385822    4461 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:13:39.388766    4461 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:13:39.385891    4461 notify.go:220] Checking for updates...
	I0911 04:13:39.394793    4461 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:13:39.395955    4461 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:13:39.398819    4461 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:13:39.401786    4461 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:13:39.405059    4461 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:13:39.405105    4461 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:13:39.409741    4461 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:13:39.416742    4461 start.go:298] selected driver: qemu2
	I0911 04:13:39.416749    4461 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:13:39.416754    4461 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:13:39.418735    4461 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:13:39.421800    4461 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:13:39.424914    4461 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:13:39.424949    4461 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0911 04:13:39.424953    4461 start_flags.go:321] config:
	{Name:kubenet-687000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:13:39.429154    4461 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:13:39.437790    4461 out.go:177] * Starting control plane node kubenet-687000 in cluster kubenet-687000
	I0911 04:13:39.441745    4461 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:13:39.441766    4461 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:13:39.441782    4461 cache.go:57] Caching tarball of preloaded images
	I0911 04:13:39.441849    4461 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:13:39.441855    4461 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:13:39.441926    4461 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/kubenet-687000/config.json ...
	I0911 04:13:39.441939    4461 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/kubenet-687000/config.json: {Name:mk4fefc14d650aec5da48fb670ba61c18d7a33b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:13:39.442154    4461 start.go:365] acquiring machines lock for kubenet-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:13:39.442183    4461 start.go:369] acquired machines lock for "kubenet-687000" in 23.833µs
	I0911 04:13:39.442194    4461 start.go:93] Provisioning new machine with config: &{Name:kubenet-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:13:39.442230    4461 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:13:39.449770    4461 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:13:39.465178    4461 start.go:159] libmachine.API.Create for "kubenet-687000" (driver="qemu2")
	I0911 04:13:39.465200    4461 client.go:168] LocalClient.Create starting
	I0911 04:13:39.465255    4461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:13:39.465281    4461 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:39.465294    4461 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:39.465335    4461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:13:39.465357    4461 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:39.465366    4461 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:39.465671    4461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:13:39.580406    4461 main.go:141] libmachine: Creating SSH key...
	I0911 04:13:39.859523    4461 main.go:141] libmachine: Creating Disk image...
	I0911 04:13:39.859532    4461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:13:39.859772    4461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/disk.qcow2
	I0911 04:13:39.868863    4461 main.go:141] libmachine: STDOUT: 
	I0911 04:13:39.868878    4461 main.go:141] libmachine: STDERR: 
	I0911 04:13:39.868946    4461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/disk.qcow2 +20000M
	I0911 04:13:39.876107    4461 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:13:39.876119    4461 main.go:141] libmachine: STDERR: 
	I0911 04:13:39.876133    4461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/disk.qcow2
	I0911 04:13:39.876140    4461 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:13:39.876175    4461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:34:3e:40:93:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/disk.qcow2
	I0911 04:13:39.877628    4461 main.go:141] libmachine: STDOUT: 
	I0911 04:13:39.877641    4461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:13:39.877660    4461 client.go:171] LocalClient.Create took 412.467833ms
	I0911 04:13:41.879760    4461 start.go:128] duration metric: createHost completed in 2.437586167s
	I0911 04:13:41.879827    4461 start.go:83] releasing machines lock for "kubenet-687000", held for 2.437710625s
	W0911 04:13:41.879927    4461 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:13:41.891341    4461 out.go:177] * Deleting "kubenet-687000" in qemu2 ...
	W0911 04:13:41.912753    4461 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:13:41.912784    4461 start.go:687] Will try again in 5 seconds ...
	I0911 04:13:46.914848    4461 start.go:365] acquiring machines lock for kubenet-687000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:13:46.915355    4461 start.go:369] acquired machines lock for "kubenet-687000" in 395.792µs
	I0911 04:13:46.915482    4461 start.go:93] Provisioning new machine with config: &{Name:kubenet-687000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:kubenet-687000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:13:46.915813    4461 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:13:46.927504    4461 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 04:13:46.973262    4461 start.go:159] libmachine.API.Create for "kubenet-687000" (driver="qemu2")
	I0911 04:13:46.973311    4461 client.go:168] LocalClient.Create starting
	I0911 04:13:46.973452    4461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:13:46.973510    4461 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:46.973531    4461 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:46.973614    4461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:13:46.973657    4461 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:46.973675    4461 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:46.974243    4461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:13:47.102264    4461 main.go:141] libmachine: Creating SSH key...
	I0911 04:13:47.186115    4461 main.go:141] libmachine: Creating Disk image...
	I0911 04:13:47.186123    4461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:13:47.186263    4461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/disk.qcow2
	I0911 04:13:47.194760    4461 main.go:141] libmachine: STDOUT: 
	I0911 04:13:47.194775    4461 main.go:141] libmachine: STDERR: 
	I0911 04:13:47.194830    4461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/disk.qcow2 +20000M
	I0911 04:13:47.201963    4461 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:13:47.201974    4461 main.go:141] libmachine: STDERR: 
	I0911 04:13:47.201990    4461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/disk.qcow2
	I0911 04:13:47.201997    4461 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:13:47.202045    4461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:52:41:be:cf:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/kubenet-687000/disk.qcow2
	I0911 04:13:47.203523    4461 main.go:141] libmachine: STDOUT: 
	I0911 04:13:47.203535    4461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:13:47.203551    4461 client.go:171] LocalClient.Create took 230.239709ms
	I0911 04:13:49.205715    4461 start.go:128] duration metric: createHost completed in 2.289936584s
	I0911 04:13:49.205795    4461 start.go:83] releasing machines lock for "kubenet-687000", held for 2.290486458s
	W0911 04:13:49.206227    4461 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-687000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:13:49.217039    4461 out.go:177] 
	W0911 04:13:49.221040    4461 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:13:49.221077    4461 out.go:239] * 
	* 
	W0911 04:13:49.223498    4461 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:13:49.232973    4461 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.92s)
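Triage note: every qemu2-driver start in this run fails at the same step. minikube launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and that wrapper cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), so no VM is ever created. A minimal check-and-restart sketch for the affected host, assuming the standard Homebrew socket_vmnet install (the service name is an assumption; the paths are taken from the log above):

    # Is the daemon running, and does its socket exist?
    pgrep -fl socket_vmnet
    ls -l /var/run/socket_vmnet
    # If not, restart it; socket_vmnet needs root to open the vmnet framework.
    sudo brew services restart socket_vmnet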

TestStartStop/group/old-k8s-version/serial/FirstStart (9.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-327000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-327000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.826882083s)

-- stdout --
	* [old-k8s-version-327000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-327000 in cluster old-k8s-version-327000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-327000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:13:51.398054    4571 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:13:51.398175    4571 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:13:51.398179    4571 out.go:309] Setting ErrFile to fd 2...
	I0911 04:13:51.398181    4571 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:13:51.398281    4571 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:13:51.399269    4571 out.go:303] Setting JSON to false
	I0911 04:13:51.414202    4571 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2605,"bootTime":1694428226,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:13:51.414274    4571 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:13:51.421482    4571 out.go:177] * [old-k8s-version-327000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:13:51.425526    4571 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:13:51.429480    4571 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:13:51.425591    4571 notify.go:220] Checking for updates...
	I0911 04:13:51.433464    4571 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:13:51.436522    4571 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:13:51.439465    4571 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:13:51.442462    4571 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:13:51.445854    4571 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:13:51.445894    4571 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:13:51.450481    4571 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:13:51.457451    4571 start.go:298] selected driver: qemu2
	I0911 04:13:51.457457    4571 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:13:51.457462    4571 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:13:51.459218    4571 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:13:51.462505    4571 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:13:51.465458    4571 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:13:51.465479    4571 cni.go:84] Creating CNI manager for ""
	I0911 04:13:51.465485    4571 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0911 04:13:51.465491    4571 start_flags.go:321] config:
	{Name:old-k8s-version-327000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-327000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:13:51.469324    4571 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:13:51.476391    4571 out.go:177] * Starting control plane node old-k8s-version-327000 in cluster old-k8s-version-327000
	I0911 04:13:51.480493    4571 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 04:13:51.480511    4571 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0911 04:13:51.480531    4571 cache.go:57] Caching tarball of preloaded images
	I0911 04:13:51.480630    4571 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:13:51.480640    4571 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0911 04:13:51.480706    4571 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/old-k8s-version-327000/config.json ...
	I0911 04:13:51.480718    4571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/old-k8s-version-327000/config.json: {Name:mk619140092e3f2ca2f0795414658f4d9fce75f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:13:51.480927    4571 start.go:365] acquiring machines lock for old-k8s-version-327000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:13:51.480956    4571 start.go:369] acquired machines lock for "old-k8s-version-327000" in 22.5µs
	I0911 04:13:51.480967    4571 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-327000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-327000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:13:51.480994    4571 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:13:51.489492    4571 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:13:51.504086    4571 start.go:159] libmachine.API.Create for "old-k8s-version-327000" (driver="qemu2")
	I0911 04:13:51.504105    4571 client.go:168] LocalClient.Create starting
	I0911 04:13:51.504158    4571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:13:51.504181    4571 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:51.504191    4571 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:51.504226    4571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:13:51.504246    4571 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:51.504253    4571 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:51.504557    4571 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:13:51.623080    4571 main.go:141] libmachine: Creating SSH key...
	I0911 04:13:51.684251    4571 main.go:141] libmachine: Creating Disk image...
	I0911 04:13:51.684262    4571 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:13:51.684405    4571 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2
	I0911 04:13:51.692776    4571 main.go:141] libmachine: STDOUT: 
	I0911 04:13:51.692791    4571 main.go:141] libmachine: STDERR: 
	I0911 04:13:51.692838    4571 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2 +20000M
	I0911 04:13:51.700009    4571 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:13:51.700030    4571 main.go:141] libmachine: STDERR: 
	I0911 04:13:51.700054    4571 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2
	I0911 04:13:51.700065    4571 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:13:51.700117    4571 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:d7:e4:99:31:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2
	I0911 04:13:51.701680    4571 main.go:141] libmachine: STDOUT: 
	I0911 04:13:51.701696    4571 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:13:51.701715    4571 client.go:171] LocalClient.Create took 197.610459ms
	I0911 04:13:53.703810    4571 start.go:128] duration metric: createHost completed in 2.222870583s
	I0911 04:13:53.703870    4571 start.go:83] releasing machines lock for "old-k8s-version-327000", held for 2.222974917s
	W0911 04:13:53.703968    4571 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:13:53.711401    4571 out.go:177] * Deleting "old-k8s-version-327000" in qemu2 ...
	W0911 04:13:53.732343    4571 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:13:53.732370    4571 start.go:687] Will try again in 5 seconds ...
	I0911 04:13:58.734512    4571 start.go:365] acquiring machines lock for old-k8s-version-327000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:13:58.735012    4571 start.go:369] acquired machines lock for "old-k8s-version-327000" in 387.041µs
	I0911 04:13:58.735126    4571 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-327000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-327000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:13:58.735423    4571 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:13:58.744072    4571 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:13:58.791157    4571 start.go:159] libmachine.API.Create for "old-k8s-version-327000" (driver="qemu2")
	I0911 04:13:58.791202    4571 client.go:168] LocalClient.Create starting
	I0911 04:13:58.791368    4571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:13:58.791428    4571 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:58.791447    4571 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:58.791520    4571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:13:58.791557    4571 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:58.791568    4571 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:58.792087    4571 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:13:58.916664    4571 main.go:141] libmachine: Creating SSH key...
	I0911 04:13:59.139472    4571 main.go:141] libmachine: Creating Disk image...
	I0911 04:13:59.139480    4571 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:13:59.139660    4571 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2
	I0911 04:13:59.148543    4571 main.go:141] libmachine: STDOUT: 
	I0911 04:13:59.148558    4571 main.go:141] libmachine: STDERR: 
	I0911 04:13:59.148629    4571 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2 +20000M
	I0911 04:13:59.155891    4571 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:13:59.155905    4571 main.go:141] libmachine: STDERR: 
	I0911 04:13:59.155920    4571 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2
	I0911 04:13:59.155926    4571 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:13:59.155957    4571 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:f2:44:20:e0:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2
	I0911 04:13:59.157513    4571 main.go:141] libmachine: STDOUT: 
	I0911 04:13:59.157526    4571 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:13:59.157542    4571 client.go:171] LocalClient.Create took 366.342292ms
	I0911 04:14:01.159641    4571 start.go:128] duration metric: createHost completed in 2.424269583s
	I0911 04:14:01.159720    4571 start.go:83] releasing machines lock for "old-k8s-version-327000", held for 2.424761958s
	W0911 04:14:01.160134    4571 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-327000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-327000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:01.169758    4571 out.go:177] 
	W0911 04:14:01.174731    4571 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:14:01.174764    4571 out.go:239] * 
	* 
	W0911 04:14:01.177164    4571 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:14:01.184757    4571 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-327000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000: exit status 7 (68.330917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.90s)
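Triage note: identical root cause to the kubenet start above; the guest was never provisioned because socket_vmnet_client could not connect. The socket can be probed directly, with neither minikube nor QEMU in the loop; a sketch using netcat's unix-socket mode (macOS nc supports -U):

    # Exits non-zero with "Connection refused" when no daemon is listening.
    nc -U /var/run/socket_vmnet < /dev/null && echo "socket_vmnet is accepting connections"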

+
TestStoppedBinaryUpgrade/Upgrade (2.58s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3370002931.exe start -p stopped-upgrade-319000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3370002931.exe start -p stopped-upgrade-319000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3370002931.exe: permission denied (1.457959ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3370002931.exe start -p stopped-upgrade-319000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3370002931.exe start -p stopped-upgrade-319000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3370002931.exe: permission denied (5.53175ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3370002931.exe start -p stopped-upgrade-319000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3370002931.exe start -p stopped-upgrade-319000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3370002931.exe: permission denied (1.252ms)
version_upgrade_test.go:202: legacy v1.6.2 start failed: fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3370002931.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.58s)
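Triage note: this failure is unrelated to socket_vmnet. fork/exec returns "permission denied" when the target file lacks the execute bit, which suggests the downloaded legacy v1.6.2 binary was written to the temp directory without mode +x (or was quarantined by Gatekeeper). A hedged fix sketch, reusing the temp path from the failure above (the actual file name varies per run):

    BIN=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3370002931.exe
    chmod +x "$BIN"                                    # fork/exec requires the execute bit
    xattr -d com.apple.quarantine "$BIN" 2>/dev/null   # clear Gatekeeper quarantine, if set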

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-327000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-327000 create -f testdata/busybox.yaml: exit status 1 (29.032917ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-327000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000: exit status 7 (30.184333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-327000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000: exit status 7 (29.133834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
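Triage note: kubectl's "error: no openapi getter" suggests the client could not build an OpenAPI-backed resource mapper because no API server was reachable; the old-k8s-version-327000 cluster was never provisioned (see the FirstStart failure above). A quick confirmation sketch, assuming kubectl is on PATH:

    # Check whether the context even exists, then whether its API server answers.
    kubectl config get-contexts old-k8s-version-327000
    kubectl --context old-k8s-version-327000 get nodes --request-timeout=5s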

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-327000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-327000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-327000 describe deploy/metrics-server -n kube-system: exit status 1 (26.119292ms)

** stderr ** 
	error: context "old-k8s-version-327000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-327000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000: exit status 7 (29.627666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
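Triage note: note the asymmetry in this test: "addons enable" succeeds, presumably because it only updates the profile's stored config, while the kubectl follow-up fails because the context was never written to the kubeconfig. As a sketch, the recorded addon state can still be queried against the stored profile, no running cluster required:

    out/minikube-darwin-arm64 addons list -p old-k8s-version-327000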

TestStoppedBinaryUpgrade/MinikubeLogs (0.08s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-319000
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-319000: exit status 85 (74.728084ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-687000 sudo cat                              | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo cat                              | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo                                  | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl status docker --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo                                  | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl cat docker                                   |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo cat                              | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /etc/docker/daemon.json                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo docker                           | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | system info                                            |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo                                  | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl status cri-docker                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo                                  | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl cat cri-docker                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo cat                              | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo cat                              | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo                                  | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo                                  | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo                                  | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo cat                              | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo cat                              | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo                                  | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo                                  | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo                                  | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo find                             | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p bridge-687000 sudo crio                             | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p bridge-687000                                       | bridge-687000          | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT | 11 Sep 23 04:13 PDT |
	| start   | -p kubenet-687000                                      | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | --memory=3072                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                        |         |         |                     |                     |
	|         | --network-plugin=kubenet                               |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo cat                             | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /etc/nsswitch.conf                                     |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo cat                             | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /etc/hosts                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo cat                             | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /etc/resolv.conf                                       |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo crictl                          | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | pods                                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo crictl                          | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | ps --all                                               |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo find                            | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                           |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo ip a s                          | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	| ssh     | -p kubenet-687000 sudo ip r s                          | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	| ssh     | -p kubenet-687000 sudo                                 | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | iptables-save                                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo                                 | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | iptables -t nat -L -n -v                               |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo                                 | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl status kubelet --all                         |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo                                 | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl cat kubelet                                  |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo                                 | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | journalctl -xeu kubelet --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo cat                             | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo cat                             | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo                                 | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl status docker --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo                                 | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl cat docker                                   |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo cat                             | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /etc/docker/daemon.json                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo docker                          | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | system info                                            |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo                                 | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl status cri-docker                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo                                 | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl cat cri-docker                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo cat                             | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo cat                             | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo                                 | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo                                 | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo                                 | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo cat                             | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo cat                             | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo                                 | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo                                 | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo                                 | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo find                            | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-687000 sudo crio                            | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p kubenet-687000                                      | kubenet-687000         | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT | 11 Sep 23 04:13 PDT |
	| start   | -p old-k8s-version-327000                              | old-k8s-version-327000 | jenkins | v1.31.2 | 11 Sep 23 04:13 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-327000        | old-k8s-version-327000 | jenkins | v1.31.2 | 11 Sep 23 04:14 PDT | 11 Sep 23 04:14 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-327000                              | old-k8s-version-327000 | jenkins | v1.31.2 | 11 Sep 23 04:14 PDT | 11 Sep 23 04:14 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-327000             | old-k8s-version-327000 | jenkins | v1.31.2 | 11 Sep 23 04:14 PDT | 11 Sep 23 04:14 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 04:13:51
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 04:13:51.398054    4571 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:13:51.398175    4571 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:13:51.398179    4571 out.go:309] Setting ErrFile to fd 2...
	I0911 04:13:51.398181    4571 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:13:51.398281    4571 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:13:51.399269    4571 out.go:303] Setting JSON to false
	I0911 04:13:51.414202    4571 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2605,"bootTime":1694428226,"procs":413,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:13:51.414274    4571 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:13:51.421482    4571 out.go:177] * [old-k8s-version-327000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:13:51.425526    4571 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:13:51.429480    4571 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:13:51.425591    4571 notify.go:220] Checking for updates...
	I0911 04:13:51.433464    4571 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:13:51.436522    4571 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:13:51.439465    4571 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:13:51.442462    4571 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:13:51.445854    4571 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:13:51.445894    4571 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:13:51.450481    4571 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:13:51.457451    4571 start.go:298] selected driver: qemu2
	I0911 04:13:51.457457    4571 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:13:51.457462    4571 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:13:51.459218    4571 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:13:51.462505    4571 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:13:51.465458    4571 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:13:51.465479    4571 cni.go:84] Creating CNI manager for ""
	I0911 04:13:51.465485    4571 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0911 04:13:51.465491    4571 start_flags.go:321] config:
	{Name:old-k8s-version-327000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-327000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSoc
k: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:13:51.469324    4571 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:13:51.476391    4571 out.go:177] * Starting control plane node old-k8s-version-327000 in cluster old-k8s-version-327000
	I0911 04:13:51.480493    4571 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 04:13:51.480511    4571 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0911 04:13:51.480531    4571 cache.go:57] Caching tarball of preloaded images
	I0911 04:13:51.480630    4571 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:13:51.480640    4571 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0911 04:13:51.480706    4571 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/old-k8s-version-327000/config.json ...
	I0911 04:13:51.480718    4571 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/old-k8s-version-327000/config.json: {Name:mk619140092e3f2ca2f0795414658f4d9fce75f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:13:51.480927    4571 start.go:365] acquiring machines lock for old-k8s-version-327000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:13:51.480956    4571 start.go:369] acquired machines lock for "old-k8s-version-327000" in 22.5µs
	I0911 04:13:51.480967    4571 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-327000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-327000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:13:51.480994    4571 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:13:51.489492    4571 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:13:51.504086    4571 start.go:159] libmachine.API.Create for "old-k8s-version-327000" (driver="qemu2")
	I0911 04:13:51.504105    4571 client.go:168] LocalClient.Create starting
	I0911 04:13:51.504158    4571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:13:51.504181    4571 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:51.504191    4571 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:51.504226    4571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:13:51.504246    4571 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:51.504253    4571 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:51.504557    4571 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:13:51.623080    4571 main.go:141] libmachine: Creating SSH key...
	I0911 04:13:51.684251    4571 main.go:141] libmachine: Creating Disk image...
	I0911 04:13:51.684262    4571 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:13:51.684405    4571 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2
	I0911 04:13:51.692776    4571 main.go:141] libmachine: STDOUT: 
	I0911 04:13:51.692791    4571 main.go:141] libmachine: STDERR: 
	I0911 04:13:51.692838    4571 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2 +20000M
	I0911 04:13:51.700009    4571 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:13:51.700030    4571 main.go:141] libmachine: STDERR: 
	I0911 04:13:51.700054    4571 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2
	I0911 04:13:51.700065    4571 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:13:51.700117    4571 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:d7:e4:99:31:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2
	I0911 04:13:51.701680    4571 main.go:141] libmachine: STDOUT: 
	I0911 04:13:51.701696    4571 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:13:51.701715    4571 client.go:171] LocalClient.Create took 197.610459ms
	I0911 04:13:53.703810    4571 start.go:128] duration metric: createHost completed in 2.222870583s
	I0911 04:13:53.703870    4571 start.go:83] releasing machines lock for "old-k8s-version-327000", held for 2.222974917s
	W0911 04:13:53.703968    4571 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:13:53.711401    4571 out.go:177] * Deleting "old-k8s-version-327000" in qemu2 ...
	W0911 04:13:53.732343    4571 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:13:53.732370    4571 start.go:687] Will try again in 5 seconds ...
	I0911 04:13:58.734512    4571 start.go:365] acquiring machines lock for old-k8s-version-327000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:13:58.735012    4571 start.go:369] acquired machines lock for "old-k8s-version-327000" in 387.041µs
	I0911 04:13:58.735126    4571 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-327000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-327000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:13:58.735423    4571 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:13:58.744072    4571 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:13:58.791157    4571 start.go:159] libmachine.API.Create for "old-k8s-version-327000" (driver="qemu2")
	I0911 04:13:58.791202    4571 client.go:168] LocalClient.Create starting
	I0911 04:13:58.791368    4571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:13:58.791428    4571 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:58.791447    4571 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:58.791520    4571 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:13:58.791557    4571 main.go:141] libmachine: Decoding PEM data...
	I0911 04:13:58.791568    4571 main.go:141] libmachine: Parsing certificate...
	I0911 04:13:58.792087    4571 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:13:58.916664    4571 main.go:141] libmachine: Creating SSH key...
	I0911 04:13:59.139472    4571 main.go:141] libmachine: Creating Disk image...
	I0911 04:13:59.139480    4571 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:13:59.139660    4571 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2
	I0911 04:13:59.148543    4571 main.go:141] libmachine: STDOUT: 
	I0911 04:13:59.148558    4571 main.go:141] libmachine: STDERR: 
	I0911 04:13:59.148629    4571 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2 +20000M
	I0911 04:13:59.155891    4571 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:13:59.155905    4571 main.go:141] libmachine: STDERR: 
	I0911 04:13:59.155920    4571 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2
	I0911 04:13:59.155926    4571 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:13:59.155957    4571 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:f2:44:20:e0:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2
	I0911 04:13:59.157513    4571 main.go:141] libmachine: STDOUT: 
	I0911 04:13:59.157526    4571 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:13:59.157542    4571 client.go:171] LocalClient.Create took 366.342292ms
	I0911 04:14:01.159641    4571 start.go:128] duration metric: createHost completed in 2.424269583s
	I0911 04:14:01.159720    4571 start.go:83] releasing machines lock for "old-k8s-version-327000", held for 2.424761958s
	W0911 04:14:01.160134    4571 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-327000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:01.169758    4571 out.go:177] 
	W0911 04:14:01.174731    4571 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:14:01.174764    4571 out.go:239] * 
	W0911 04:14:01.177164    4571 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	
	* 
	* Profile "stopped-upgrade-319000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-319000"

-- /stdout --
version_upgrade_test.go:221: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.08s)
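
The `exit status 85` from `minikube logs` follows from the message just above it: the `stopped-upgrade-319000` profile does not exist on the agent, so there is no cluster to collect logs from. A quick way to confirm on the agent, using only commands the output itself suggests (`<profile>` is a placeholder, not a value from this report):

	out/minikube-darwin-arm64 profile list
	out/minikube-darwin-arm64 logs -p <profile> --file=logs.txt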

TestStartStop/group/old-k8s-version/serial/SecondStart (5.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-327000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-327000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (5.221781708s)

-- stdout --
	* [old-k8s-version-327000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-327000 in cluster old-k8s-version-327000
	* Restarting existing qemu2 VM for "old-k8s-version-327000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-327000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:14:01.649481    4608 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:14:01.649725    4608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:01.649729    4608 out.go:309] Setting ErrFile to fd 2...
	I0911 04:14:01.649731    4608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:01.649970    4608 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:14:01.653108    4608 out.go:303] Setting JSON to false
	I0911 04:14:01.668562    4608 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2615,"bootTime":1694428226,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:14:01.668616    4608 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:14:01.678825    4608 out.go:177] * [old-k8s-version-327000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:14:01.686775    4608 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:14:01.682946    4608 notify.go:220] Checking for updates...
	I0911 04:14:01.693799    4608 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:14:01.700848    4608 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:14:01.707787    4608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:14:01.713758    4608 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:14:01.724776    4608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:14:01.732084    4608 config.go:182] Loaded profile config "old-k8s-version-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0911 04:14:01.737836    4608 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0911 04:14:01.740862    4608 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:14:01.744780    4608 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 04:14:01.751832    4608 start.go:298] selected driver: qemu2
	I0911 04:14:01.751840    4608 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-327000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-327000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequ
ested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:14:01.751901    4608 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:14:01.755307    4608 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:14:01.755336    4608 cni.go:84] Creating CNI manager for ""
	I0911 04:14:01.755342    4608 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0911 04:14:01.755347    4608 start_flags.go:321] config:
	{Name:old-k8s-version-327000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-327000 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Use
rs:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:14:01.759621    4608 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:01.767871    4608 out.go:177] * Starting control plane node old-k8s-version-327000 in cluster old-k8s-version-327000
	I0911 04:14:01.770820    4608 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 04:14:01.770839    4608 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0911 04:14:01.770855    4608 cache.go:57] Caching tarball of preloaded images
	I0911 04:14:01.770916    4608 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:14:01.770921    4608 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0911 04:14:01.770980    4608 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/old-k8s-version-327000/config.json ...
	I0911 04:14:01.771295    4608 start.go:365] acquiring machines lock for old-k8s-version-327000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:01.771322    4608 start.go:369] acquired machines lock for "old-k8s-version-327000" in 19.584µs
	I0911 04:14:01.771333    4608 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:14:01.771337    4608 fix.go:54] fixHost starting: 
	I0911 04:14:01.771448    4608 fix.go:102] recreateIfNeeded on old-k8s-version-327000: state=Stopped err=<nil>
	W0911 04:14:01.771456    4608 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:14:01.775798    4608 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-327000" ...
	I0911 04:14:01.783927    4608 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:f2:44:20:e0:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2
	I0911 04:14:01.786312    4608 main.go:141] libmachine: STDOUT: 
	I0911 04:14:01.786327    4608 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:01.786356    4608 fix.go:56] fixHost completed within 15.017834ms
	I0911 04:14:01.786361    4608 start.go:83] releasing machines lock for "old-k8s-version-327000", held for 15.035208ms
	W0911 04:14:01.786369    4608 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:14:01.786419    4608 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:01.786423    4608 start.go:687] Will try again in 5 seconds ...
	I0911 04:14:06.786568    4608 start.go:365] acquiring machines lock for old-k8s-version-327000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:06.787047    4608 start.go:369] acquired machines lock for "old-k8s-version-327000" in 399.042µs
	I0911 04:14:06.787168    4608 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:14:06.787191    4608 fix.go:54] fixHost starting: 
	I0911 04:14:06.788085    4608 fix.go:102] recreateIfNeeded on old-k8s-version-327000: state=Stopped err=<nil>
	W0911 04:14:06.788114    4608 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:14:06.797593    4608 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-327000" ...
	I0911 04:14:06.800786    4608 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:f2:44:20:e0:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/old-k8s-version-327000/disk.qcow2
	I0911 04:14:06.810088    4608 main.go:141] libmachine: STDOUT: 
	I0911 04:14:06.810154    4608 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:06.810259    4608 fix.go:56] fixHost completed within 23.07025ms
	I0911 04:14:06.810288    4608 start.go:83] releasing machines lock for "old-k8s-version-327000", held for 23.218209ms
	W0911 04:14:06.810512    4608 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-327000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-327000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:06.818594    4608 out.go:177] 
	W0911 04:14:06.822636    4608 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:14:06.822675    4608 out.go:239] * 
	* 
	W0911 04:14:06.825219    4608 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:14:06.833564    4608 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-327000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000: exit status 7 (68.216833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.29s)
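
Editor's note: every qemu2-driver failure in this group reduces to the same host-side condition: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor. A minimal triage sketch for the CI host follows; the launchd label is an assumption based on the upstream socket_vmnet install instructions, not something this log confirms:

	# Does the socket exist, and who owns it?
	ls -l /var/run/socket_vmnet
	# Probe the socket directly; "Connection refused" here reproduces the failure
	nc -U /var/run/socket_vmnet < /dev/null
	# If socket_vmnet runs under launchd, check and restart it (label assumed)
	sudo launchctl list | grep socket_vmnet
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet

If the daemon is down, every test that creates a qemu2 VM on this agent fails identically, which matches the pattern across this report.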

TestStartStop/group/no-preload/serial/FirstStart (10.48s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-616000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-616000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (10.405684625s)

-- stdout --
	* [no-preload-616000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-616000 in cluster no-preload-616000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-616000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:14:01.997706    4629 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:14:01.997843    4629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:01.997846    4629 out.go:309] Setting ErrFile to fd 2...
	I0911 04:14:01.997848    4629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:01.997960    4629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:14:01.998964    4629 out.go:303] Setting JSON to false
	I0911 04:14:02.013995    4629 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2616,"bootTime":1694428226,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:14:02.014048    4629 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:14:02.019026    4629 out.go:177] * [no-preload-616000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:14:02.026001    4629 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:14:02.026037    4629 notify.go:220] Checking for updates...
	I0911 04:14:02.030038    4629 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:14:02.033884    4629 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:14:02.037002    4629 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:14:02.039996    4629 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:14:02.042898    4629 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:14:02.046322    4629 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:14:02.046401    4629 config.go:182] Loaded profile config "old-k8s-version-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0911 04:14:02.046437    4629 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:14:02.051025    4629 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:14:02.057943    4629 start.go:298] selected driver: qemu2
	I0911 04:14:02.057951    4629 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:14:02.057958    4629 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:14:02.060008    4629 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:14:02.062992    4629 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:14:02.066002    4629 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:14:02.066040    4629 cni.go:84] Creating CNI manager for ""
	I0911 04:14:02.066047    4629 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:14:02.066052    4629 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:14:02.066058    4629 start_flags.go:321] config:
	{Name:no-preload-616000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-616000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:14:02.070076    4629 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:02.077790    4629 out.go:177] * Starting control plane node no-preload-616000 in cluster no-preload-616000
	I0911 04:14:02.081910    4629 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:14:02.081987    4629 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/no-preload-616000/config.json ...
	I0911 04:14:02.082008    4629 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/no-preload-616000/config.json: {Name:mk205d9e57bab500f7a4d9fd6518e8f8e4f3e6f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:14:02.082014    4629 cache.go:107] acquiring lock: {Name:mk8369bcdd9b846fad76d05d4bf65b5c2f784223 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:02.082015    4629 cache.go:107] acquiring lock: {Name:mk0937927e55a208b56fb5051dfed2c2c0dac040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:02.082035    4629 cache.go:107] acquiring lock: {Name:mk906eee170c23ef85d73de7c571c478a874591c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:02.082081    4629 cache.go:115] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0911 04:14:02.082089    4629 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 76.209µs
	I0911 04:14:02.082095    4629 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0911 04:14:02.082104    4629 cache.go:107] acquiring lock: {Name:mka3f37d209756a9d64805bb97a78cbfa8750a77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:02.082172    4629 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 04:14:02.082189    4629 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 04:14:02.082230    4629 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 04:14:02.082256    4629 start.go:365] acquiring machines lock for no-preload-616000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:02.082281    4629 start.go:369] acquired machines lock for "no-preload-616000" in 20.458µs
	I0911 04:14:02.082258    4629 cache.go:107] acquiring lock: {Name:mkd718ae9faa29e8b5e7d6aa56fce9d7de898249 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:02.082310    4629 cache.go:107] acquiring lock: {Name:mk4427db8803bd54ebc0df896aa4bf3d5497ea78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:02.082292    4629 start.go:93] Provisioning new machine with config: &{Name:no-preload-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-616000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:14:02.082330    4629 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:14:02.082337    4629 cache.go:107] acquiring lock: {Name:mke4ac6cc3b1650eb885729ecbc99df67986526c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:02.086865    4629 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:14:02.082386    4629 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 04:14:02.082439    4629 cache.go:107] acquiring lock: {Name:mk721585c00738463f6a0d718feec4b5affe3b15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:02.082464    4629 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0911 04:14:02.082480    4629 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 04:14:02.087485    4629 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0911 04:14:02.094207    4629 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 04:14:02.094331    4629 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 04:14:02.094929    4629 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 04:14:02.097968    4629 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 04:14:02.098188    4629 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0911 04:14:02.098505    4629 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0911 04:14:02.098550    4629 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 04:14:02.102740    4629 start.go:159] libmachine.API.Create for "no-preload-616000" (driver="qemu2")
	I0911 04:14:02.102757    4629 client.go:168] LocalClient.Create starting
	I0911 04:14:02.102821    4629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:14:02.102842    4629 main.go:141] libmachine: Decoding PEM data...
	I0911 04:14:02.102854    4629 main.go:141] libmachine: Parsing certificate...
	I0911 04:14:02.102893    4629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:14:02.102908    4629 main.go:141] libmachine: Decoding PEM data...
	I0911 04:14:02.102915    4629 main.go:141] libmachine: Parsing certificate...
	I0911 04:14:02.103270    4629 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:14:02.220513    4629 main.go:141] libmachine: Creating SSH key...
	I0911 04:14:02.285820    4629 main.go:141] libmachine: Creating Disk image...
	I0911 04:14:02.285830    4629 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:14:02.285996    4629 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/disk.qcow2
	I0911 04:14:02.295360    4629 main.go:141] libmachine: STDOUT: 
	I0911 04:14:02.295404    4629 main.go:141] libmachine: STDERR: 
	I0911 04:14:02.295479    4629 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/disk.qcow2 +20000M
	I0911 04:14:02.303379    4629 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:14:02.303392    4629 main.go:141] libmachine: STDERR: 
	I0911 04:14:02.303420    4629 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/disk.qcow2
	I0911 04:14:02.303426    4629 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:14:02.303496    4629 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:ed:bb:85:b3:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/disk.qcow2
	I0911 04:14:02.305093    4629 main.go:141] libmachine: STDOUT: 
	I0911 04:14:02.305119    4629 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:02.305136    4629 client.go:171] LocalClient.Create took 202.378916ms
	I0911 04:14:02.679004    4629 cache.go:162] opening:  /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1
	I0911 04:14:02.731315    4629 cache.go:162] opening:  /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1
	I0911 04:14:02.926045    4629 cache.go:162] opening:  /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1
	I0911 04:14:03.126524    4629 cache.go:162] opening:  /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0911 04:14:03.356801    4629 cache.go:162] opening:  /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0
	I0911 04:14:03.546167    4629 cache.go:162] opening:  /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0911 04:14:03.681273    4629 cache.go:157] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0911 04:14:03.681293    4629 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.599093958s
	I0911 04:14:03.681303    4629 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0911 04:14:03.776121    4629 cache.go:162] opening:  /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0911 04:14:04.305276    4629 start.go:128] duration metric: createHost completed in 2.222977084s
	I0911 04:14:04.305318    4629 start.go:83] releasing machines lock for "no-preload-616000", held for 2.22309875s
	W0911 04:14:04.305379    4629 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:04.317407    4629 out.go:177] * Deleting "no-preload-616000" in qemu2 ...
	W0911 04:14:04.337562    4629 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:04.337601    4629 start.go:687] Will try again in 5 seconds ...
	I0911 04:14:05.395775    4629 cache.go:157] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0911 04:14:05.395843    4629 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 3.313671375s
	I0911 04:14:05.395877    4629 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0911 04:14:05.752574    4629 cache.go:157] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 exists
	I0911 04:14:05.752625    4629 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.1" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1" took 3.670721208s
	I0911 04:14:05.752693    4629 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.1 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 succeeded
	I0911 04:14:05.792341    4629 cache.go:157] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 exists
	I0911 04:14:05.792380    4629 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.1" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1" took 3.710388584s
	I0911 04:14:05.792405    4629 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.1 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 succeeded
	I0911 04:14:06.586553    4629 cache.go:157] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 exists
	I0911 04:14:06.586606    4629 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.1" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1" took 4.504731791s
	I0911 04:14:06.586632    4629 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.1 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 succeeded
	I0911 04:14:08.015905    4629 cache.go:157] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 exists
	I0911 04:14:08.015913    4629 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.1" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1" took 5.933923542s
	I0911 04:14:08.015919    4629 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.1 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 succeeded
	I0911 04:14:09.338639    4629 start.go:365] acquiring machines lock for no-preload-616000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:10.003252    4629 start.go:369] acquired machines lock for "no-preload-616000" in 664.558583ms
	I0911 04:14:10.003392    4629 start.go:93] Provisioning new machine with config: &{Name:no-preload-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-616000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:14:10.003640    4629 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:14:10.014264    4629 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:14:10.063062    4629 start.go:159] libmachine.API.Create for "no-preload-616000" (driver="qemu2")
	I0911 04:14:10.063096    4629 client.go:168] LocalClient.Create starting
	I0911 04:14:10.063215    4629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:14:10.063273    4629 main.go:141] libmachine: Decoding PEM data...
	I0911 04:14:10.063296    4629 main.go:141] libmachine: Parsing certificate...
	I0911 04:14:10.063380    4629 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:14:10.063415    4629 main.go:141] libmachine: Decoding PEM data...
	I0911 04:14:10.063432    4629 main.go:141] libmachine: Parsing certificate...
	I0911 04:14:10.063884    4629 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:14:10.190964    4629 main.go:141] libmachine: Creating SSH key...
	I0911 04:14:10.317529    4629 main.go:141] libmachine: Creating Disk image...
	I0911 04:14:10.317535    4629 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:14:10.317694    4629 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/disk.qcow2
	I0911 04:14:10.326096    4629 main.go:141] libmachine: STDOUT: 
	I0911 04:14:10.326111    4629 main.go:141] libmachine: STDERR: 
	I0911 04:14:10.326164    4629 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/disk.qcow2 +20000M
	I0911 04:14:10.333471    4629 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:14:10.333484    4629 main.go:141] libmachine: STDERR: 
	I0911 04:14:10.333500    4629 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/disk.qcow2
	I0911 04:14:10.333508    4629 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:14:10.333568    4629 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:49:9b:34:98:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/disk.qcow2
	I0911 04:14:10.335036    4629 main.go:141] libmachine: STDOUT: 
	I0911 04:14:10.335052    4629 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:10.335063    4629 client.go:171] LocalClient.Create took 271.971291ms
	I0911 04:14:11.556714    4629 cache.go:157] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I0911 04:14:11.556773    4629 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 9.474857708s
	I0911 04:14:11.556801    4629 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I0911 04:14:11.556840    4629 cache.go:87] Successfully saved all images to host disk.
	I0911 04:14:12.337281    4629 start.go:128] duration metric: createHost completed in 2.333604625s
	I0911 04:14:12.337364    4629 start.go:83] releasing machines lock for "no-preload-616000", held for 2.334151333s
	W0911 04:14:12.337757    4629 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-616000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-616000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:12.347891    4629 out.go:177] 
	W0911 04:14:12.351843    4629 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:14:12.351865    4629 out.go:239] * 
	* 
	W0911 04:14:12.354321    4629 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:14:12.362784    4629 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-616000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (66.429625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.48s)
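
Editor's note: the log above shows that with --preload=false the per-image cache path still completed (every required image was saved to tar, ending with "Successfully saved all images to host disk.") while both VM creation attempts failed, so this failure is the socket_vmnet connection again, not the no-preload flow itself. The cached tars can be confirmed on the host with something like:

	ls /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/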

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-327000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000: exit status 7 (31.353917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
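
Editor's note: 'context "old-k8s-version-327000" does not exist' is a downstream effect of the failed starts above: minikube exits before it ever writes the profile's context into the kubeconfig. A quick confirmation (kubeconfig path taken from the start logs earlier in this report) would be:

	kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/17223-1124/kubeconfig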

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-327000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-327000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-327000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.52ms)

** stderr ** 
	error: context "old-k8s-version-327000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-327000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000: exit status 7 (31.737ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
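
Editor's note: the assertion here greps the dashboard-metrics-scraper deployment description for "registry.k8s.io/echoserver:1.4". On a cluster that actually started, the same check could be made directly against the deployment images, along these lines (a sketch; assumes the dashboard addon is enabled on the profile):

	kubectl --context old-k8s-version-327000 -n kubernetes-dashboard get deploy \
	  -o jsonpath='{range .items[*]}{.spec.template.spec.containers[*].image}{"\n"}{end}'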

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-327000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-327000 "sudo crictl images -o json": exit status 89 (47.141083ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-327000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-327000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-327000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000: exit status 7 (28.053458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
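
Editor's note: the "failed to decode images json" message is a consequence, not a separate bug: with the host stopped, minikube printed its exit-89 advisory text, and the test then tried to parse that advisory as crictl JSON. On a running node the equivalent manual check would be (assuming jq is available on the host):

	out/minikube-darwin-arm64 ssh -p old-k8s-version-327000 "sudo crictl images -o json" \
	  | jq -r '.images[].repoTags[]'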

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-327000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-327000 --alsologtostderr -v=1: exit status 89 (44.344625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-327000"

-- /stdout --
** stderr ** 
	I0911 04:14:07.107978    4749 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:14:07.108330    4749 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:07.108333    4749 out.go:309] Setting ErrFile to fd 2...
	I0911 04:14:07.108335    4749 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:07.108484    4749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:14:07.108686    4749 out.go:303] Setting JSON to false
	I0911 04:14:07.108695    4749 mustload.go:65] Loading cluster: old-k8s-version-327000
	I0911 04:14:07.108877    4749 config.go:182] Loaded profile config "old-k8s-version-327000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0911 04:14:07.112723    4749 out.go:177] * The control plane node must be running for this command
	I0911 04:14:07.120711    4749 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-327000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-327000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000: exit status 7 (27.88875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-327000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000: exit status 7 (27.762708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-327000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
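
Editor's note: exit status 89 accompanies minikube's "control plane node must be running" advisory; pause never touches the guest when the host is stopped. A guard along these lines (a sketch, mirroring the status check the post-mortem already runs) distinguishes this condition from a genuine pause failure:

	if [ "$(out/minikube-darwin-arm64 status -p old-k8s-version-327000 --format '{{.Host}}')" != "Running" ]; then
	  echo "host not running; start it first with: minikube start -p old-k8s-version-327000"
	fi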

TestStartStop/group/embed-certs/serial/FirstStart (9.86s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-476000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-476000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (9.795250625s)

-- stdout --
	* [embed-certs-476000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-476000 in cluster embed-certs-476000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-476000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:14:07.573716    4772 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:14:07.573835    4772 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:07.573838    4772 out.go:309] Setting ErrFile to fd 2...
	I0911 04:14:07.573840    4772 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:07.573948    4772 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:14:07.575000    4772 out.go:303] Setting JSON to false
	I0911 04:14:07.591045    4772 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2621,"bootTime":1694428226,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:14:07.591105    4772 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:14:07.595750    4772 out.go:177] * [embed-certs-476000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:14:07.606745    4772 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:14:07.603742    4772 notify.go:220] Checking for updates...
	I0911 04:14:07.614671    4772 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:14:07.621742    4772 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:14:07.629755    4772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:14:07.637638    4772 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:14:07.645559    4772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:14:07.649935    4772 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:14:07.649991    4772 config.go:182] Loaded profile config "no-preload-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:14:07.650036    4772 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:14:07.652750    4772 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:14:07.659698    4772 start.go:298] selected driver: qemu2
	I0911 04:14:07.659702    4772 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:14:07.659708    4772 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:14:07.661662    4772 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:14:07.665735    4772 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:14:07.669758    4772 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:14:07.669779    4772 cni.go:84] Creating CNI manager for ""
	I0911 04:14:07.669785    4772 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:14:07.669789    4772 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:14:07.669795    4772 start_flags.go:321] config:
	{Name:embed-certs-476000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-476000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:14:07.673829    4772 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:07.681564    4772 out.go:177] * Starting control plane node embed-certs-476000 in cluster embed-certs-476000
	I0911 04:14:07.685725    4772 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:14:07.685741    4772 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:14:07.685755    4772 cache.go:57] Caching tarball of preloaded images
	I0911 04:14:07.685811    4772 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:14:07.685816    4772 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:14:07.685886    4772 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/embed-certs-476000/config.json ...
	I0911 04:14:07.685898    4772 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/embed-certs-476000/config.json: {Name:mkc64d466f9873be58e3e7878cb689175f920d08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:14:07.686105    4772 start.go:365] acquiring machines lock for embed-certs-476000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:07.686132    4772 start.go:369] acquired machines lock for "embed-certs-476000" in 22.583µs
	I0911 04:14:07.686142    4772 start.go:93] Provisioning new machine with config: &{Name:embed-certs-476000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-476000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:14:07.686166    4772 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:14:07.694723    4772 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:14:07.709485    4772 start.go:159] libmachine.API.Create for "embed-certs-476000" (driver="qemu2")
	I0911 04:14:07.709511    4772 client.go:168] LocalClient.Create starting
	I0911 04:14:07.709568    4772 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:14:07.709596    4772 main.go:141] libmachine: Decoding PEM data...
	I0911 04:14:07.709606    4772 main.go:141] libmachine: Parsing certificate...
	I0911 04:14:07.709644    4772 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:14:07.709662    4772 main.go:141] libmachine: Decoding PEM data...
	I0911 04:14:07.709676    4772 main.go:141] libmachine: Parsing certificate...
	I0911 04:14:07.709991    4772 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:14:07.829567    4772 main.go:141] libmachine: Creating SSH key...
	I0911 04:14:07.983243    4772 main.go:141] libmachine: Creating Disk image...
	I0911 04:14:07.983251    4772 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:14:07.983392    4772 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/disk.qcow2
	I0911 04:14:07.991890    4772 main.go:141] libmachine: STDOUT: 
	I0911 04:14:07.991904    4772 main.go:141] libmachine: STDERR: 
	I0911 04:14:07.991952    4772 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/disk.qcow2 +20000M
	I0911 04:14:07.999277    4772 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:14:07.999289    4772 main.go:141] libmachine: STDERR: 
	I0911 04:14:07.999305    4772 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/disk.qcow2
	I0911 04:14:07.999314    4772 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:14:07.999355    4772 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:8b:cc:f6:84:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/disk.qcow2
	I0911 04:14:08.000892    4772 main.go:141] libmachine: STDOUT: 
	I0911 04:14:08.000904    4772 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:08.000922    4772 client.go:171] LocalClient.Create took 291.412875ms
	I0911 04:14:10.003088    4772 start.go:128] duration metric: createHost completed in 2.316945917s
	I0911 04:14:10.003141    4772 start.go:83] releasing machines lock for "embed-certs-476000", held for 2.317067875s
	W0911 04:14:10.003196    4772 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:10.022250    4772 out.go:177] * Deleting "embed-certs-476000" in qemu2 ...
	W0911 04:14:10.039116    4772 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:10.039145    4772 start.go:687] Will try again in 5 seconds ...
	I0911 04:14:15.041126    4772 start.go:365] acquiring machines lock for embed-certs-476000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:15.041534    4772 start.go:369] acquired machines lock for "embed-certs-476000" in 329.5µs
	I0911 04:14:15.041684    4772 start.go:93] Provisioning new machine with config: &{Name:embed-certs-476000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-476000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:14:15.041966    4772 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:14:15.046907    4772 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:14:15.093981    4772 start.go:159] libmachine.API.Create for "embed-certs-476000" (driver="qemu2")
	I0911 04:14:15.094085    4772 client.go:168] LocalClient.Create starting
	I0911 04:14:15.094286    4772 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:14:15.094371    4772 main.go:141] libmachine: Decoding PEM data...
	I0911 04:14:15.094404    4772 main.go:141] libmachine: Parsing certificate...
	I0911 04:14:15.094514    4772 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:14:15.094557    4772 main.go:141] libmachine: Decoding PEM data...
	I0911 04:14:15.094577    4772 main.go:141] libmachine: Parsing certificate...
	I0911 04:14:15.095286    4772 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:14:15.224349    4772 main.go:141] libmachine: Creating SSH key...
	I0911 04:14:15.284014    4772 main.go:141] libmachine: Creating Disk image...
	I0911 04:14:15.284023    4772 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:14:15.284162    4772 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/disk.qcow2
	I0911 04:14:15.292513    4772 main.go:141] libmachine: STDOUT: 
	I0911 04:14:15.292532    4772 main.go:141] libmachine: STDERR: 
	I0911 04:14:15.292580    4772 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/disk.qcow2 +20000M
	I0911 04:14:15.299747    4772 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:14:15.299769    4772 main.go:141] libmachine: STDERR: 
	I0911 04:14:15.299782    4772 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/disk.qcow2
	I0911 04:14:15.299788    4772 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:14:15.299829    4772 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:0e:ad:22:a1:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/disk.qcow2
	I0911 04:14:15.301368    4772 main.go:141] libmachine: STDOUT: 
	I0911 04:14:15.301383    4772 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:15.301395    4772 client.go:171] LocalClient.Create took 207.310333ms
	I0911 04:14:17.303500    4772 start.go:128] duration metric: createHost completed in 2.261579417s
	I0911 04:14:17.303567    4772 start.go:83] releasing machines lock for "embed-certs-476000", held for 2.262079875s
	W0911 04:14:17.304034    4772 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-476000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-476000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:17.311580    4772 out.go:177] 
	W0911 04:14:17.315688    4772 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:14:17.315713    4772 out.go:239] * 
	* 
	W0911 04:14:17.318147    4772 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:14:17.327421    4772 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-476000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000: exit status 7 (65.001667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-476000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.86s)
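Note on the failure mode above: every qemu2 start in this run dies at the same point, before the VM ever boots. The driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, and that helper cannot reach the socket_vmnet daemon on /var/run/socket_vmnet ("Connection refused"). A minimal host-side triage sketch, assuming the default socket_vmnet layout shown in the logs (the gateway address below is illustrative, not taken from this report):

	# Is the control socket present, and is a daemon holding it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If not, start it manually (vmnet requires root). This mirrors the
	# upstream socket_vmnet invocation; adjust paths and gateway to the host:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet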

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-616000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-616000 create -f testdata/busybox.yaml: exit status 1 (29.718875ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-616000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (28.844291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (28.5245ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
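Note: the "error: no openapi getter" from kubectl above is a secondary symptom, not a manifest problem. The no-preload-616000 context exists in the kubeconfig, but the API server behind it never came up, so kubectl cannot build the schema-backed validator it needs for create. A quick way to separate connectivity failures from manifest errors (a sketch using standard kubectl flags, with the same context name as above):

	# Fails the same way while the apiserver is unreachable:
	kubectl --context no-preload-616000 get nodes

	# Renders the manifest purely client-side; if this succeeds, the YAML is
	# fine and only the cluster connection is broken:
	kubectl create -f testdata/busybox.yaml --dry-run=client --validate=false -o yaml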

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-616000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-616000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-616000 describe deploy/metrics-server -n kube-system: exit status 1 (26.147584ms)

** stderr ** 
	error: context "no-preload-616000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-616000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (28.078458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)
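Note: EnableAddonWhileActive enables metrics-server with an overridden image and registry and then expects the resulting deployment to reference " fake.domain/registry.k8s.io/echoserver:1.4". With no cluster running, the describe call can only fail with a missing context. On a healthy cluster the same assertion could be checked directly (a hypothetical sketch; it assumes the addon deployment exists in kube-system):

	kubectl --context no-preload-616000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4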

TestStartStop/group/no-preload/serial/SecondStart (5.21s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-616000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-616000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.168361542s)

-- stdout --
	* [no-preload-616000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-616000 in cluster no-preload-616000
	* Restarting existing qemu2 VM for "no-preload-616000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-616000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:14:12.822042    4807 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:14:12.822164    4807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:12.822167    4807 out.go:309] Setting ErrFile to fd 2...
	I0911 04:14:12.822169    4807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:12.822276    4807 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:14:12.823263    4807 out.go:303] Setting JSON to false
	I0911 04:14:12.838380    4807 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2626,"bootTime":1694428226,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:14:12.838471    4807 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:14:12.842511    4807 out.go:177] * [no-preload-616000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:14:12.849536    4807 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:14:12.849599    4807 notify.go:220] Checking for updates...
	I0911 04:14:12.857535    4807 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:14:12.861526    4807 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:14:12.864518    4807 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:14:12.867544    4807 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:14:12.870567    4807 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:14:12.873685    4807 config.go:182] Loaded profile config "no-preload-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:14:12.873917    4807 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:14:12.878518    4807 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 04:14:12.885487    4807 start.go:298] selected driver: qemu2
	I0911 04:14:12.885494    4807 start.go:902] validating driver "qemu2" against &{Name:no-preload-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-616000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:
false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:14:12.885569    4807 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:14:12.887658    4807 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:14:12.887690    4807 cni.go:84] Creating CNI manager for ""
	I0911 04:14:12.887697    4807 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:14:12.887703    4807 start_flags.go:321] config:
	{Name:no-preload-616000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-616000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/mi
nikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:14:12.891746    4807 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:12.898501    4807 out.go:177] * Starting control plane node no-preload-616000 in cluster no-preload-616000
	I0911 04:14:12.902497    4807 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:14:12.902572    4807 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/no-preload-616000/config.json ...
	I0911 04:14:12.902597    4807 cache.go:107] acquiring lock: {Name:mk8369bcdd9b846fad76d05d4bf65b5c2f784223 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:12.902626    4807 cache.go:107] acquiring lock: {Name:mk0937927e55a208b56fb5051dfed2c2c0dac040 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:12.902628    4807 cache.go:107] acquiring lock: {Name:mkd718ae9faa29e8b5e7d6aa56fce9d7de898249 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:12.902668    4807 cache.go:115] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0911 04:14:12.902673    4807 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 78.375µs
	I0911 04:14:12.902681    4807 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0911 04:14:12.902685    4807 cache.go:115] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 exists
	I0911 04:14:12.902690    4807 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.1" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1" took 86.167µs
	I0911 04:14:12.902694    4807 cache.go:107] acquiring lock: {Name:mke4ac6cc3b1650eb885729ecbc99df67986526c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:12.902727    4807 cache.go:115] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 exists
	I0911 04:14:12.902733    4807 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.1" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1" took 111.166µs
	I0911 04:14:12.902738    4807 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.1 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.28.1 succeeded
	I0911 04:14:12.902735    4807 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.1 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.28.1 succeeded
	I0911 04:14:12.902747    4807 cache.go:115] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 exists
	I0911 04:14:12.902752    4807 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0" took 58.833µs
	I0911 04:14:12.902757    4807 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I0911 04:14:12.902777    4807 cache.go:107] acquiring lock: {Name:mka3f37d209756a9d64805bb97a78cbfa8750a77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:12.902796    4807 cache.go:107] acquiring lock: {Name:mk906eee170c23ef85d73de7c571c478a874591c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:12.902807    4807 cache.go:107] acquiring lock: {Name:mk721585c00738463f6a0d718feec4b5affe3b15 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:12.902786    4807 cache.go:107] acquiring lock: {Name:mk4427db8803bd54ebc0df896aa4bf3d5497ea78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:12.902818    4807 cache.go:115] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 exists
	I0911 04:14:12.902856    4807 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.1" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1" took 82.333µs
	I0911 04:14:12.902865    4807 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.1 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.28.1 succeeded
	I0911 04:14:12.902865    4807 cache.go:115] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 exists
	I0911 04:14:12.902897    4807 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.1" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1" took 133.458µs
	I0911 04:14:12.902902    4807 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.1 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.28.1 succeeded
	I0911 04:14:12.902870    4807 cache.go:115] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0911 04:14:12.902907    4807 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 156.417µs
	I0911 04:14:12.902911    4807 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0911 04:14:12.902924    4807 start.go:365] acquiring machines lock for no-preload-616000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:12.902966    4807 start.go:369] acquired machines lock for "no-preload-616000" in 36.833µs
	I0911 04:14:12.902972    4807 cache.go:115] /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0911 04:14:12.902978    4807 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 204.375µs
	I0911 04:14:12.902982    4807 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:14:12.902987    4807 fix.go:54] fixHost starting: 
	I0911 04:14:12.902983    4807 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0911 04:14:12.902998    4807 cache.go:87] Successfully saved all images to host disk.
	I0911 04:14:12.903123    4807 fix.go:102] recreateIfNeeded on no-preload-616000: state=Stopped err=<nil>
	W0911 04:14:12.903133    4807 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:14:12.911489    4807 out.go:177] * Restarting existing qemu2 VM for "no-preload-616000" ...
	I0911 04:14:12.915516    4807 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:49:9b:34:98:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/disk.qcow2
	I0911 04:14:12.917606    4807 main.go:141] libmachine: STDOUT: 
	I0911 04:14:12.917622    4807 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:12.917658    4807 fix.go:56] fixHost completed within 14.670917ms
	I0911 04:14:12.917663    4807 start.go:83] releasing machines lock for "no-preload-616000", held for 14.690708ms
	W0911 04:14:12.917673    4807 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:14:12.917715    4807 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:12.917719    4807 start.go:687] Will try again in 5 seconds ...
	I0911 04:14:17.919638    4807 start.go:365] acquiring machines lock for no-preload-616000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:17.919793    4807 start.go:369] acquired machines lock for "no-preload-616000" in 98.417µs
	I0911 04:14:17.919830    4807 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:14:17.919834    4807 fix.go:54] fixHost starting: 
	I0911 04:14:17.920062    4807 fix.go:102] recreateIfNeeded on no-preload-616000: state=Stopped err=<nil>
	W0911 04:14:17.920069    4807 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:14:17.924117    4807 out.go:177] * Restarting existing qemu2 VM for "no-preload-616000" ...
	I0911 04:14:17.932398    4807 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:49:9b:34:98:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/no-preload-616000/disk.qcow2
	I0911 04:14:17.934839    4807 main.go:141] libmachine: STDOUT: 
	I0911 04:14:17.934859    4807 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:17.934888    4807 fix.go:56] fixHost completed within 15.047042ms
	I0911 04:14:17.934895    4807 start.go:83] releasing machines lock for "no-preload-616000", held for 15.0885ms
	W0911 04:14:17.934956    4807 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-616000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-616000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:17.941325    4807 out.go:177] 
	W0911 04:14:17.945328    4807 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:14:17.945341    4807 out.go:239] * 
	* 
	W0911 04:14:17.946171    4807 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:14:17.955103    4807 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-616000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (40.628625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.21s)
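Note: SecondStart takes a different code path from FirstStart. Because the profile already exists, minikube skips machine creation ("Skipping create...Using existing machine configuration"), fixHost sees state=Stopped, and both restart attempts die in roughly 15ms at the same socket connect, before QEMU is ever executed. The failing step can be reproduced in isolation (a sketch; `true` stands in for the real qemu-system-aarch64 command line):

	# socket_vmnet_client connects to the socket and execs the given command
	# with the vmnet file descriptor attached; with the daemon down it fails
	# immediately:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# -> Failed to connect to "/var/run/socket_vmnet": Connection refused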

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-476000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-476000 create -f testdata/busybox.yaml: exit status 1 (28.931416ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-476000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000: exit status 7 (27.812583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-476000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000: exit status 7 (28.3715ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-476000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-476000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-476000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-476000 describe deploy/metrics-server -n kube-system: exit status 1 (24.903417ms)

** stderr ** 
	error: context "embed-certs-476000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-476000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000: exit status 7 (27.87075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-476000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-476000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-476000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.174624833s)

-- stdout --
	* [embed-certs-476000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-476000 in cluster embed-certs-476000
	* Restarting existing qemu2 VM for "embed-certs-476000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-476000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:14:17.778509    4836 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:14:17.778640    4836 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:17.778643    4836 out.go:309] Setting ErrFile to fd 2...
	I0911 04:14:17.778646    4836 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:17.778752    4836 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:14:17.779683    4836 out.go:303] Setting JSON to false
	I0911 04:14:17.794801    4836 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2631,"bootTime":1694428226,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:14:17.794853    4836 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:14:17.800250    4836 out.go:177] * [embed-certs-476000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:14:17.807352    4836 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:14:17.807350    4836 notify.go:220] Checking for updates...
	I0911 04:14:17.810310    4836 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:14:17.811771    4836 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:14:17.815284    4836 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:14:17.818320    4836 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:14:17.821362    4836 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:14:17.824474    4836 config.go:182] Loaded profile config "embed-certs-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:14:17.824688    4836 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:14:17.835220    4836 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 04:14:17.838344    4836 start.go:298] selected driver: qemu2
	I0911 04:14:17.838349    4836 start.go:902] validating driver "qemu2" against &{Name:embed-certs-476000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-476000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:14:17.838422    4836 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:14:17.840434    4836 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:14:17.840458    4836 cni.go:84] Creating CNI manager for ""
	I0911 04:14:17.840464    4836 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:14:17.840472    4836 start_flags.go:321] config:
	{Name:embed-certs-476000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-476000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:14:17.844382    4836 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:17.852253    4836 out.go:177] * Starting control plane node embed-certs-476000 in cluster embed-certs-476000
	I0911 04:14:17.856257    4836 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:14:17.856291    4836 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:14:17.856306    4836 cache.go:57] Caching tarball of preloaded images
	I0911 04:14:17.856365    4836 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:14:17.856370    4836 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:14:17.856426    4836 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/embed-certs-476000/config.json ...
	I0911 04:14:17.856746    4836 start.go:365] acquiring machines lock for embed-certs-476000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:17.856772    4836 start.go:369] acquired machines lock for "embed-certs-476000" in 19.792µs
	I0911 04:14:17.856782    4836 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:14:17.856786    4836 fix.go:54] fixHost starting: 
	I0911 04:14:17.856902    4836 fix.go:102] recreateIfNeeded on embed-certs-476000: state=Stopped err=<nil>
	W0911 04:14:17.856910    4836 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:14:17.861240    4836 out.go:177] * Restarting existing qemu2 VM for "embed-certs-476000" ...
	I0911 04:14:17.869297    4836 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:0e:ad:22:a1:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/disk.qcow2
	I0911 04:14:17.871196    4836 main.go:141] libmachine: STDOUT: 
	I0911 04:14:17.871211    4836 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:17.871237    4836 fix.go:56] fixHost completed within 14.450584ms
	I0911 04:14:17.871241    4836 start.go:83] releasing machines lock for "embed-certs-476000", held for 14.4655ms
	W0911 04:14:17.871247    4836 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:14:17.871276    4836 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:17.871280    4836 start.go:687] Will try again in 5 seconds ...
	I0911 04:14:22.871769    4836 start.go:365] acquiring machines lock for embed-certs-476000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:22.872182    4836 start.go:369] acquired machines lock for "embed-certs-476000" in 338.958µs
	I0911 04:14:22.872310    4836 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:14:22.872330    4836 fix.go:54] fixHost starting: 
	I0911 04:14:22.873138    4836 fix.go:102] recreateIfNeeded on embed-certs-476000: state=Stopped err=<nil>
	W0911 04:14:22.873165    4836 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:14:22.881877    4836 out.go:177] * Restarting existing qemu2 VM for "embed-certs-476000" ...
	I0911 04:14:22.885206    4836 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:0e:ad:22:a1:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/embed-certs-476000/disk.qcow2
	I0911 04:14:22.893808    4836 main.go:141] libmachine: STDOUT: 
	I0911 04:14:22.893854    4836 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:22.893944    4836 fix.go:56] fixHost completed within 21.612083ms
	I0911 04:14:22.893963    4836 start.go:83] releasing machines lock for "embed-certs-476000", held for 21.761458ms
	W0911 04:14:22.894161    4836 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-476000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-476000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:22.901946    4836 out.go:177] 
	W0911 04:14:22.905008    4836 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:14:22.905037    4836 out.go:239] * 
	* 
	W0911 04:14:22.907649    4836 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:14:22.916888    4836 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-476000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000: exit status 7 (66.736ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-476000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.24s)
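Note on root cause: every qemu2 start failure in this group ends with the same stderr, 'Failed to connect to "/var/run/socket_vmnet": Connection refused'. That means socket_vmnet_client could not reach a daemon on the Unix socket, so QEMU never received its network file descriptor. A minimal check to run on the build agent (a sketch, assuming the /opt/socket_vmnet layout shown in the logs; lsof ships with macOS):

	# Does the socket path exist at all?
	ls -l /var/run/socket_vmnet
	# Is any process holding the Unix socket open? The expected
	# listener is the socket_vmnet daemon running as root.
	sudo lsof -U | grep /var/run/socket_vmnet
	pgrep -fl socket_vmnet

If nothing is listening, every qemu2 start on this agent will fail the same way until the daemon is restarted (see the note after the default-k8s-diff-port FirstStart failure below).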

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-616000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (28.963792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-616000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-616000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-616000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.413959ms)

** stderr ** 
	error: context "no-preload-616000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-616000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (28.085625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-616000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-616000 "sudo crictl images -o json": exit status 89 (36.73925ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-616000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-616000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-616000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (28.462916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
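Note: this failure, like the surrounding UserAppExistsAfterStop and AddonExistsAfterStop failures, cascades from the failed start above: with the host stopped, "minikube ssh" exits 89 before crictl ever runs, so the harness ends up diffing the expected image list against nothing. On a healthy cluster the same check can be reproduced by hand (a sketch; jq is an assumption, it is not bundled with macOS):

	# List the image tags the cluster actually has, one per line.
	out/minikube-darwin-arm64 ssh -p no-preload-616000 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'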

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-616000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-616000 --alsologtostderr -v=1: exit status 89 (39.990958ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-616000"

-- /stdout --
** stderr ** 
	I0911 04:14:18.182149    4855 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:14:18.182296    4855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:18.182299    4855 out.go:309] Setting ErrFile to fd 2...
	I0911 04:14:18.182301    4855 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:18.182420    4855 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:14:18.182624    4855 out.go:303] Setting JSON to false
	I0911 04:14:18.182633    4855 mustload.go:65] Loading cluster: no-preload-616000
	I0911 04:14:18.182802    4855 config.go:182] Loaded profile config "no-preload-616000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:14:18.186745    4855 out.go:177] * The control plane node must be running for this command
	I0911 04:14:18.190775    4855 out.go:177]   To start a cluster, run: "minikube start -p no-preload-616000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-616000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (28.001292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (28.140291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-616000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-405000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-405000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (9.732731541s)

-- stdout --
	* [default-k8s-diff-port-405000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-405000 in cluster default-k8s-diff-port-405000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-405000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:14:18.870882    4890 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:14:18.870992    4890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:18.870995    4890 out.go:309] Setting ErrFile to fd 2...
	I0911 04:14:18.870998    4890 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:18.871105    4890 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:14:18.872115    4890 out.go:303] Setting JSON to false
	I0911 04:14:18.887027    4890 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2632,"bootTime":1694428226,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:14:18.887096    4890 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:14:18.890757    4890 out.go:177] * [default-k8s-diff-port-405000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:14:18.896675    4890 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:14:18.899723    4890 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:14:18.896740    4890 notify.go:220] Checking for updates...
	I0911 04:14:18.906602    4890 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:14:18.909656    4890 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:14:18.912635    4890 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:14:18.915663    4890 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:14:18.919048    4890 config.go:182] Loaded profile config "embed-certs-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:14:18.919119    4890 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:14:18.919160    4890 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:14:18.922550    4890 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:14:18.933720    4890 start.go:298] selected driver: qemu2
	I0911 04:14:18.933728    4890 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:14:18.933735    4890 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:14:18.935679    4890 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 04:14:18.938693    4890 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:14:18.941773    4890 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:14:18.941814    4890 cni.go:84] Creating CNI manager for ""
	I0911 04:14:18.941821    4890 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:14:18.941826    4890 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:14:18.941832    4890 start_flags.go:321] config:
	{Name:default-k8s-diff-port-405000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-405000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Stat
icIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:14:18.946194    4890 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:18.954653    4890 out.go:177] * Starting control plane node default-k8s-diff-port-405000 in cluster default-k8s-diff-port-405000
	I0911 04:14:18.958664    4890 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:14:18.958685    4890 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:14:18.958705    4890 cache.go:57] Caching tarball of preloaded images
	I0911 04:14:18.958766    4890 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:14:18.958772    4890 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:14:18.958889    4890 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/default-k8s-diff-port-405000/config.json ...
	I0911 04:14:18.958903    4890 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/default-k8s-diff-port-405000/config.json: {Name:mk3118b4c172cb58b08cf019d3f1e39e4c3de4fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:14:18.959118    4890 start.go:365] acquiring machines lock for default-k8s-diff-port-405000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:18.959152    4890 start.go:369] acquired machines lock for "default-k8s-diff-port-405000" in 25.833µs
	I0911 04:14:18.959164    4890 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-405000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-405000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:14:18.959198    4890 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:14:18.967713    4890 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:14:18.983835    4890 start.go:159] libmachine.API.Create for "default-k8s-diff-port-405000" (driver="qemu2")
	I0911 04:14:18.983850    4890 client.go:168] LocalClient.Create starting
	I0911 04:14:18.983907    4890 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:14:18.983938    4890 main.go:141] libmachine: Decoding PEM data...
	I0911 04:14:18.983948    4890 main.go:141] libmachine: Parsing certificate...
	I0911 04:14:18.983989    4890 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:14:18.984009    4890 main.go:141] libmachine: Decoding PEM data...
	I0911 04:14:18.984019    4890 main.go:141] libmachine: Parsing certificate...
	I0911 04:14:18.984363    4890 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:14:19.097185    4890 main.go:141] libmachine: Creating SSH key...
	I0911 04:14:19.196744    4890 main.go:141] libmachine: Creating Disk image...
	I0911 04:14:19.196751    4890 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:14:19.196885    4890 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/disk.qcow2
	I0911 04:14:19.205481    4890 main.go:141] libmachine: STDOUT: 
	I0911 04:14:19.205498    4890 main.go:141] libmachine: STDERR: 
	I0911 04:14:19.205563    4890 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/disk.qcow2 +20000M
	I0911 04:14:19.212625    4890 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:14:19.212637    4890 main.go:141] libmachine: STDERR: 
	I0911 04:14:19.212651    4890 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/disk.qcow2
	I0911 04:14:19.212663    4890 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:14:19.212699    4890 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:fd:0a:77:36:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/disk.qcow2
	I0911 04:14:19.214179    4890 main.go:141] libmachine: STDOUT: 
	I0911 04:14:19.214193    4890 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:19.214211    4890 client.go:171] LocalClient.Create took 230.36325ms
	I0911 04:14:21.216379    4890 start.go:128] duration metric: createHost completed in 2.257236542s
	I0911 04:14:21.216436    4890 start.go:83] releasing machines lock for "default-k8s-diff-port-405000", held for 2.257343542s
	W0911 04:14:21.216491    4890 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:21.228899    4890 out.go:177] * Deleting "default-k8s-diff-port-405000" in qemu2 ...
	W0911 04:14:21.249060    4890 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:21.249086    4890 start.go:687] Will try again in 5 seconds ...
	I0911 04:14:26.251194    4890 start.go:365] acquiring machines lock for default-k8s-diff-port-405000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:26.251588    4890 start.go:369] acquired machines lock for "default-k8s-diff-port-405000" in 317.167µs
	I0911 04:14:26.251707    4890 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-405000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22
KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-405000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:
docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:14:26.252067    4890 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:14:26.257706    4890 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:14:26.305229    4890 start.go:159] libmachine.API.Create for "default-k8s-diff-port-405000" (driver="qemu2")
	I0911 04:14:26.305272    4890 client.go:168] LocalClient.Create starting
	I0911 04:14:26.305388    4890 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:14:26.305464    4890 main.go:141] libmachine: Decoding PEM data...
	I0911 04:14:26.305487    4890 main.go:141] libmachine: Parsing certificate...
	I0911 04:14:26.305571    4890 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:14:26.305622    4890 main.go:141] libmachine: Decoding PEM data...
	I0911 04:14:26.305640    4890 main.go:141] libmachine: Parsing certificate...
	I0911 04:14:26.306135    4890 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:14:26.435144    4890 main.go:141] libmachine: Creating SSH key...
	I0911 04:14:26.518301    4890 main.go:141] libmachine: Creating Disk image...
	I0911 04:14:26.518310    4890 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:14:26.518445    4890 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/disk.qcow2
	I0911 04:14:26.526900    4890 main.go:141] libmachine: STDOUT: 
	I0911 04:14:26.526920    4890 main.go:141] libmachine: STDERR: 
	I0911 04:14:26.526979    4890 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/disk.qcow2 +20000M
	I0911 04:14:26.534030    4890 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:14:26.534051    4890 main.go:141] libmachine: STDERR: 
	I0911 04:14:26.534065    4890 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/disk.qcow2
	I0911 04:14:26.534073    4890 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:14:26.534132    4890 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:a7:a5:b4:d3:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/disk.qcow2
	I0911 04:14:26.535634    4890 main.go:141] libmachine: STDOUT: 
	I0911 04:14:26.535647    4890 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:26.535662    4890 client.go:171] LocalClient.Create took 230.385833ms
	I0911 04:14:28.537800    4890 start.go:128] duration metric: createHost completed in 2.285767333s
	I0911 04:14:28.538096    4890 start.go:83] releasing machines lock for "default-k8s-diff-port-405000", held for 2.286351s
	W0911 04:14:28.538448    4890 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-405000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-405000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:28.546102    4890 out.go:177] 
	W0911 04:14:28.550236    4890 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:14:28.550266    4890 out.go:239] * 
	* 
	W0911 04:14:28.552817    4890 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:14:28.563078    4890 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-405000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000: exit status 7 (65.278666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-405000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.80s)
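Note: in this run the disk-image steps (qemu-img convert and resize) succeed and only the socket_vmnet connection fails, which points at the daemon rather than QEMU. A sketch of bringing the daemon back up by hand, based on the upstream lima-vm/socket_vmnet README (the gateway address below is an assumption; use whatever this agent was provisioned with):

	# socket_vmnet needs root to use the vmnet framework; it creates
	# and listens on the socket path passed as its last argument.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &
	# Then retry the failing start with the same flags the harness used:
	out/minikube-darwin-arm64 start -p default-k8s-diff-port-405000 --memory=2200 --apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.28.1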

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-476000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000: exit status 7 (32.42175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-476000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-476000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-476000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-476000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.012042ms)

** stderr ** 
	error: context "embed-certs-476000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-476000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000: exit status 7 (28.319916ms)
                                                
-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-476000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-476000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-476000 "sudo crictl images -o json": exit status 89 (37.396875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-476000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-476000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-476000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000: exit status 7 (28.240416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-476000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
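
Note: the JSON decode error above is a symptom, not the cause: with the host stopped, minikube printed its advisory text instead of running crictl, and that text (starting with '*') reached the decoder. Against a running profile the same command returns JSON; listing the tags the test diffs would look like this (sketch, assuming crictl's usual {"images":[{"repoTags":[...]}]} output layout):

    out/minikube-darwin-arm64 ssh -p embed-certs-476000 "sudo crictl images -o json" \
      | python3 -c 'import json,sys; [print(t) for i in json.load(sys.stdin)["images"] for t in i["repoTags"]]'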

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-476000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-476000 --alsologtostderr -v=1: exit status 89 (39.402875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-476000"

-- /stdout --
** stderr ** 
	I0911 04:14:23.176020    4912 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:14:23.176163    4912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:23.176166    4912 out.go:309] Setting ErrFile to fd 2...
	I0911 04:14:23.176168    4912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:23.176277    4912 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:14:23.176477    4912 out.go:303] Setting JSON to false
	I0911 04:14:23.176485    4912 mustload.go:65] Loading cluster: embed-certs-476000
	I0911 04:14:23.176671    4912 config.go:182] Loaded profile config "embed-certs-476000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:14:23.179921    4912 out.go:177] * The control plane node must be running for this command
	I0911 04:14:23.183953    4912 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-476000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-476000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000: exit status 7 (27.934125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-476000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000: exit status 7 (27.994292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-476000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
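
Note: exit status 89 is minikube's "control plane node must be running" guard, so pause can never succeed against a Stopped host. The required order of operations (sketch, using the same binary as the test):

    out/minikube-darwin-arm64 status -p embed-certs-476000 --format='{{.Host}}'  # expect "Running"
    out/minikube-darwin-arm64 start -p embed-certs-476000                        # bring the node up first
    out/minikube-darwin-arm64 pause -p embed-certs-476000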

TestStartStop/group/newest-cni/serial/FirstStart (9.82s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-846000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-846000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (9.755285458s)

-- stdout --
	* [newest-cni-846000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-846000 in cluster newest-cni-846000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-846000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:14:23.632597    4935 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:14:23.632700    4935 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:23.632702    4935 out.go:309] Setting ErrFile to fd 2...
	I0911 04:14:23.632705    4935 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:23.632809    4935 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:14:23.633805    4935 out.go:303] Setting JSON to false
	I0911 04:14:23.649012    4935 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2637,"bootTime":1694428226,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:14:23.649075    4935 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:14:23.653982    4935 out.go:177] * [newest-cni-846000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:14:23.660908    4935 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:14:23.660975    4935 notify.go:220] Checking for updates...
	I0911 04:14:23.668907    4935 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:14:23.672980    4935 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:14:23.675958    4935 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:14:23.678976    4935 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:14:23.681908    4935 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:14:23.685335    4935 config.go:182] Loaded profile config "default-k8s-diff-port-405000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:14:23.685396    4935 config.go:182] Loaded profile config "multinode-479000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:14:23.685450    4935 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:14:23.689933    4935 out.go:177] * Using the qemu2 driver based on user configuration
	I0911 04:14:23.696936    4935 start.go:298] selected driver: qemu2
	I0911 04:14:23.696943    4935 start.go:902] validating driver "qemu2" against <nil>
	I0911 04:14:23.696950    4935 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:14:23.698979    4935 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0911 04:14:23.699006    4935 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0911 04:14:23.706886    4935 out.go:177] * Automatically selected the socket_vmnet network
	I0911 04:14:23.709976    4935 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0911 04:14:23.709998    4935 cni.go:84] Creating CNI manager for ""
	I0911 04:14:23.710005    4935 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:14:23.710010    4935 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 04:14:23.710015    4935 start_flags.go:321] config:
	{Name:newest-cni-846000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-846000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:14:23.714333    4935 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:23.719963    4935 out.go:177] * Starting control plane node newest-cni-846000 in cluster newest-cni-846000
	I0911 04:14:23.723913    4935 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:14:23.723937    4935 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:14:23.723958    4935 cache.go:57] Caching tarball of preloaded images
	I0911 04:14:23.724021    4935 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:14:23.724033    4935 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:14:23.724119    4935 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/newest-cni-846000/config.json ...
	I0911 04:14:23.724132    4935 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/newest-cni-846000/config.json: {Name:mk679d09d5f02a06bb661ca29bc918494a05132b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 04:14:23.724343    4935 start.go:365] acquiring machines lock for newest-cni-846000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:23.724374    4935 start.go:369] acquired machines lock for "newest-cni-846000" in 25.5µs
	I0911 04:14:23.724386    4935 start.go:93] Provisioning new machine with config: &{Name:newest-cni-846000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-846000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:14:23.724415    4935 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:14:23.732879    4935 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:14:23.749602    4935 start.go:159] libmachine.API.Create for "newest-cni-846000" (driver="qemu2")
	I0911 04:14:23.749627    4935 client.go:168] LocalClient.Create starting
	I0911 04:14:23.749700    4935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:14:23.749729    4935 main.go:141] libmachine: Decoding PEM data...
	I0911 04:14:23.749750    4935 main.go:141] libmachine: Parsing certificate...
	I0911 04:14:23.749792    4935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:14:23.749810    4935 main.go:141] libmachine: Decoding PEM data...
	I0911 04:14:23.749817    4935 main.go:141] libmachine: Parsing certificate...
	I0911 04:14:23.750153    4935 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:14:23.869124    4935 main.go:141] libmachine: Creating SSH key...
	I0911 04:14:24.038175    4935 main.go:141] libmachine: Creating Disk image...
	I0911 04:14:24.038182    4935 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:14:24.038334    4935 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/disk.qcow2
	I0911 04:14:24.046990    4935 main.go:141] libmachine: STDOUT: 
	I0911 04:14:24.047005    4935 main.go:141] libmachine: STDERR: 
	I0911 04:14:24.047053    4935 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/disk.qcow2 +20000M
	I0911 04:14:24.054218    4935 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:14:24.054230    4935 main.go:141] libmachine: STDERR: 
	I0911 04:14:24.054249    4935 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/disk.qcow2
	I0911 04:14:24.054257    4935 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:14:24.054302    4935 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:1f:42:8b:c1:bb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/disk.qcow2
	I0911 04:14:24.055773    4935 main.go:141] libmachine: STDOUT: 
	I0911 04:14:24.055785    4935 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:24.055807    4935 client.go:171] LocalClient.Create took 306.177417ms
	I0911 04:14:26.057907    4935 start.go:128] duration metric: createHost completed in 2.333545875s
	I0911 04:14:26.057968    4935 start.go:83] releasing machines lock for "newest-cni-846000", held for 2.333657125s
	W0911 04:14:26.058030    4935 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:26.065550    4935 out.go:177] * Deleting "newest-cni-846000" in qemu2 ...
	W0911 04:14:26.085839    4935 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:26.085869    4935 start.go:687] Will try again in 5 seconds ...
	I0911 04:14:31.087967    4935 start.go:365] acquiring machines lock for newest-cni-846000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:31.088388    4935 start.go:369] acquired machines lock for "newest-cni-846000" in 323.167µs
	I0911 04:14:31.088495    4935 start.go:93] Provisioning new machine with config: &{Name:newest-cni-846000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-846000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0911 04:14:31.088814    4935 start.go:125] createHost starting for "" (driver="qemu2")
	I0911 04:14:31.098379    4935 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 04:14:31.145400    4935 start.go:159] libmachine.API.Create for "newest-cni-846000" (driver="qemu2")
	I0911 04:14:31.145434    4935 client.go:168] LocalClient.Create starting
	I0911 04:14:31.145543    4935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/ca.pem
	I0911 04:14:31.145588    4935 main.go:141] libmachine: Decoding PEM data...
	I0911 04:14:31.145605    4935 main.go:141] libmachine: Parsing certificate...
	I0911 04:14:31.145696    4935 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17223-1124/.minikube/certs/cert.pem
	I0911 04:14:31.145724    4935 main.go:141] libmachine: Decoding PEM data...
	I0911 04:14:31.145737    4935 main.go:141] libmachine: Parsing certificate...
	I0911 04:14:31.146320    4935 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso...
	I0911 04:14:31.272079    4935 main.go:141] libmachine: Creating SSH key...
	I0911 04:14:31.299655    4935 main.go:141] libmachine: Creating Disk image...
	I0911 04:14:31.299661    4935 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0911 04:14:31.299812    4935 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/disk.qcow2.raw /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/disk.qcow2
	I0911 04:14:31.308260    4935 main.go:141] libmachine: STDOUT: 
	I0911 04:14:31.308274    4935 main.go:141] libmachine: STDERR: 
	I0911 04:14:31.308332    4935 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/disk.qcow2 +20000M
	I0911 04:14:31.315444    4935 main.go:141] libmachine: STDOUT: Image resized.
	
	I0911 04:14:31.315455    4935 main.go:141] libmachine: STDERR: 
	I0911 04:14:31.315465    4935 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/disk.qcow2
	I0911 04:14:31.315471    4935 main.go:141] libmachine: Starting QEMU VM...
	I0911 04:14:31.315517    4935 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:25:7c:df:fd:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/disk.qcow2
	I0911 04:14:31.317044    4935 main.go:141] libmachine: STDOUT: 
	I0911 04:14:31.317061    4935 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:31.317077    4935 client.go:171] LocalClient.Create took 171.641708ms
	I0911 04:14:33.319172    4935 start.go:128] duration metric: createHost completed in 2.230402959s
	I0911 04:14:33.319235    4935 start.go:83] releasing machines lock for "newest-cni-846000", held for 2.230896125s
	W0911 04:14:33.319678    4935 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-846000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-846000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:33.330212    4935 out.go:177] 
	W0911 04:14:33.335174    4935 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:14:33.335199    4935 out.go:239] * 
	* 
	W0911 04:14:33.338149    4935 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:14:33.347103    4935 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-846000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-846000 -n newest-cni-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-846000 -n newest-cni-846000: exit status 7 (66.334875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.82s)
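
Note: this is the root failure shared by every qemu2 start in this run: QEMU is launched through socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, i.e. the socket_vmnet daemon is not running on the agent. Host-side triage sketch (assumes the Homebrew-managed socket_vmnet install these jobs rely on):

    ls -l /var/run/socket_vmnet            # the Unix socket QEMU connects through
    pgrep -fl socket_vmnet                 # the daemon should be running (as root)
    sudo brew services start socket_vmnet  # restart it; the exact launchd job/path may differ per agent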

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-405000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-405000 create -f testdata/busybox.yaml: exit status 1 (29.30525ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-405000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000: exit status 7 (28.811958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-405000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000: exit status 7 (28.180375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-405000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
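
Note: kubectl appears to report "no openapi getter" when create cannot obtain the OpenAPI schema from a live API server; with the profile's VM never started, nothing is listening behind the context. Sanity checks before re-running the deploy (sketch):

    kubectl config get-contexts default-k8s-diff-port-405000    # the context must exist
    kubectl --context default-k8s-diff-port-405000 cluster-info # the API server must answer
    kubectl --context default-k8s-diff-port-405000 create -f testdata/busybox.yaml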

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-405000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-405000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-405000 describe deploy/metrics-server -n kube-system: exit status 1 (25.588125ms)

** stderr ** 
	error: context "default-k8s-diff-port-405000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-405000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000: exit status 7 (28.411333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-405000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
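
Note: the failed assertion only reads the deployment spec: after addons enable metrics-server with the --images/--registries overrides, the metrics-server container image should carry the fake.domain prefix. Expressed directly (sketch):

    kubectl --context default-k8s-diff-port-405000 -n kube-system \
      get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected to contain: fake.domain/registry.k8s.io/echoserver:1.4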

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-405000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-405000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.172908791s)

-- stdout --
	* [default-k8s-diff-port-405000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-405000 in cluster default-k8s-diff-port-405000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-405000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-405000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0911 04:14:29.017043    4967 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:14:29.017139    4967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:29.017142    4967 out.go:309] Setting ErrFile to fd 2...
	I0911 04:14:29.017144    4967 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:29.017249    4967 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:14:29.018197    4967 out.go:303] Setting JSON to false
	I0911 04:14:29.033165    4967 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2643,"bootTime":1694428226,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:14:29.033249    4967 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:14:29.037755    4967 out.go:177] * [default-k8s-diff-port-405000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:14:29.040834    4967 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:14:29.040910    4967 notify.go:220] Checking for updates...
	I0911 04:14:29.044683    4967 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:14:29.048757    4967 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:14:29.051778    4967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:14:29.054792    4967 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:14:29.057765    4967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:14:29.061092    4967 config.go:182] Loaded profile config "default-k8s-diff-port-405000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:14:29.061359    4967 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:14:29.065681    4967 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 04:14:29.072783    4967 start.go:298] selected driver: qemu2
	I0911 04:14:29.072790    4967 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-405000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-405000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:14:29.072856    4967 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:14:29.074834    4967 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 04:14:29.074859    4967 cni.go:84] Creating CNI manager for ""
	I0911 04:14:29.074865    4967 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:14:29.074870    4967 start_flags.go:321] config:
	{Name:default-k8s-diff-port-405000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-405000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:14:29.078668    4967 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:29.085753    4967 out.go:177] * Starting control plane node default-k8s-diff-port-405000 in cluster default-k8s-diff-port-405000
	I0911 04:14:29.089755    4967 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:14:29.089771    4967 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:14:29.089786    4967 cache.go:57] Caching tarball of preloaded images
	I0911 04:14:29.089836    4967 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:14:29.089841    4967 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:14:29.089898    4967 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/default-k8s-diff-port-405000/config.json ...
	I0911 04:14:29.090263    4967 start.go:365] acquiring machines lock for default-k8s-diff-port-405000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:29.090290    4967 start.go:369] acquired machines lock for "default-k8s-diff-port-405000" in 20.958µs
	I0911 04:14:29.090300    4967 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:14:29.090303    4967 fix.go:54] fixHost starting: 
	I0911 04:14:29.090421    4967 fix.go:102] recreateIfNeeded on default-k8s-diff-port-405000: state=Stopped err=<nil>
	W0911 04:14:29.090430    4967 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:14:29.094847    4967 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-405000" ...
	I0911 04:14:29.102770    4967 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:a7:a5:b4:d3:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/disk.qcow2
	I0911 04:14:29.104561    4967 main.go:141] libmachine: STDOUT: 
	I0911 04:14:29.104606    4967 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:29.104635    4967 fix.go:56] fixHost completed within 14.329792ms
	I0911 04:14:29.104640    4967 start.go:83] releasing machines lock for "default-k8s-diff-port-405000", held for 14.347042ms
	W0911 04:14:29.104646    4967 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:14:29.104677    4967 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:29.104684    4967 start.go:687] Will try again in 5 seconds ...
	I0911 04:14:34.106716    4967 start.go:365] acquiring machines lock for default-k8s-diff-port-405000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:34.107047    4967 start.go:369] acquired machines lock for "default-k8s-diff-port-405000" in 246.083µs
	I0911 04:14:34.107162    4967 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:14:34.107184    4967 fix.go:54] fixHost starting: 
	I0911 04:14:34.107923    4967 fix.go:102] recreateIfNeeded on default-k8s-diff-port-405000: state=Stopped err=<nil>
	W0911 04:14:34.107947    4967 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:14:34.117311    4967 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-405000" ...
	I0911 04:14:34.121404    4967 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:a7:a5:b4:d3:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/default-k8s-diff-port-405000/disk.qcow2
	I0911 04:14:34.129775    4967 main.go:141] libmachine: STDOUT: 
	I0911 04:14:34.129836    4967 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:34.129913    4967 fix.go:56] fixHost completed within 22.730958ms
	I0911 04:14:34.130322    4967 start.go:83] releasing machines lock for "default-k8s-diff-port-405000", held for 23.258ms
	W0911 04:14:34.130489    4967 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-405000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-405000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:34.138325    4967 out.go:177] 
	W0911 04:14:34.141450    4967 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:14:34.141469    4967 out.go:239] * 
	* 
	W0911 04:14:34.143307    4967 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:14:34.151144    4967 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-405000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000: exit status 7 (63.977375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-405000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.24s)
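
Note: for the restart path, the recovery minikube itself prints is the applicable one; once socket_vmnet is back up, the profile can be recreated with the original flags (sketch):

    out/minikube-darwin-arm64 delete -p default-k8s-diff-port-405000
    out/minikube-darwin-arm64 start -p default-k8s-diff-port-405000 --memory=2200 \
      --apiserver-port=8444 --driver=qemu2 --kubernetes-version=v1.28.1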

TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-846000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-846000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1: exit status 80 (5.171957125s)

                                                
                                                
-- stdout --
	* [newest-cni-846000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-846000 in cluster newest-cni-846000
	* Restarting existing qemu2 VM for "newest-cni-846000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-846000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:14:33.663068    4988 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:14:33.663179    4988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:33.663182    4988 out.go:309] Setting ErrFile to fd 2...
	I0911 04:14:33.663184    4988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:33.663299    4988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:14:33.664230    4988 out.go:303] Setting JSON to false
	I0911 04:14:33.679256    4988 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2647,"bootTime":1694428226,"procs":416,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 04:14:33.679321    4988 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 04:14:33.682741    4988 out.go:177] * [newest-cni-846000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 04:14:33.688643    4988 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 04:14:33.688728    4988 notify.go:220] Checking for updates...
	I0911 04:14:33.692642    4988 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 04:14:33.695506    4988 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 04:14:33.698660    4988 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 04:14:33.701651    4988 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 04:14:33.702987    4988 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 04:14:33.705880    4988 config.go:182] Loaded profile config "newest-cni-846000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:14:33.706103    4988 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 04:14:33.710621    4988 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 04:14:33.716546    4988 start.go:298] selected driver: qemu2
	I0911 04:14:33.716552    4988 start.go:902] validating driver "qemu2" against &{Name:newest-cni-846000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-846000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:14:33.716602    4988 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 04:14:33.718591    4988 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0911 04:14:33.718616    4988 cni.go:84] Creating CNI manager for ""
	I0911 04:14:33.718623    4988 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 04:14:33.718628    4988 start_flags.go:321] config:
	{Name:newest-cni-846000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-846000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 04:14:33.722627    4988 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 04:14:33.726633    4988 out.go:177] * Starting control plane node newest-cni-846000 in cluster newest-cni-846000
	I0911 04:14:33.734603    4988 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 04:14:33.734634    4988 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 04:14:33.734646    4988 cache.go:57] Caching tarball of preloaded images
	I0911 04:14:33.734722    4988 preload.go:174] Found /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0911 04:14:33.734727    4988 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on docker
	I0911 04:14:33.734790    4988 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/newest-cni-846000/config.json ...
	I0911 04:14:33.735145    4988 start.go:365] acquiring machines lock for newest-cni-846000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:33.735171    4988 start.go:369] acquired machines lock for "newest-cni-846000" in 19.75µs
	I0911 04:14:33.735181    4988 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:14:33.735185    4988 fix.go:54] fixHost starting: 
	I0911 04:14:33.735297    4988 fix.go:102] recreateIfNeeded on newest-cni-846000: state=Stopped err=<nil>
	W0911 04:14:33.735305    4988 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:14:33.739627    4988 out.go:177] * Restarting existing qemu2 VM for "newest-cni-846000" ...
	I0911 04:14:33.747686    4988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:25:7c:df:fd:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/disk.qcow2
	I0911 04:14:33.749635    4988 main.go:141] libmachine: STDOUT: 
	I0911 04:14:33.749652    4988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:33.749683    4988 fix.go:56] fixHost completed within 14.497041ms
	I0911 04:14:33.749689    4988 start.go:83] releasing machines lock for "newest-cni-846000", held for 14.5145ms
	W0911 04:14:33.749695    4988 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:14:33.749729    4988 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:33.749732    4988 start.go:687] Will try again in 5 seconds ...
	I0911 04:14:38.751727    4988 start.go:365] acquiring machines lock for newest-cni-846000: {Name:mk13c4e6e8f76dc95ba49f351b9cceb185f93037 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 04:14:38.752452    4988 start.go:369] acquired machines lock for "newest-cni-846000" in 529.25µs
	I0911 04:14:38.752665    4988 start.go:96] Skipping create...Using existing machine configuration
	I0911 04:14:38.752681    4988 fix.go:54] fixHost starting: 
	I0911 04:14:38.753500    4988 fix.go:102] recreateIfNeeded on newest-cni-846000: state=Stopped err=<nil>
	W0911 04:14:38.753532    4988 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 04:14:38.756771    4988 out.go:177] * Restarting existing qemu2 VM for "newest-cni-846000" ...
	I0911 04:14:38.764013    4988 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:25:7c:df:fd:cd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17223-1124/.minikube/machines/newest-cni-846000/disk.qcow2
	I0911 04:14:38.773561    4988 main.go:141] libmachine: STDOUT: 
	I0911 04:14:38.773644    4988 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0911 04:14:38.773767    4988 fix.go:56] fixHost completed within 21.084209ms
	I0911 04:14:38.773793    4988 start.go:83] releasing machines lock for "newest-cni-846000", held for 21.312542ms
	W0911 04:14:38.774076    4988 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-846000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-846000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0911 04:14:38.782803    4988 out.go:177] 
	W0911 04:14:38.785948    4988 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0911 04:14:38.785988    4988 out.go:239] * 
	* 
	W0911 04:14:38.788618    4988 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 04:14:38.795888    4988 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-846000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.28.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-846000 -n newest-cni-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-846000 -n newest-cni-846000: exit status 7 (66.908125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-405000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000: exit status 7 (31.243417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-405000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-405000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-405000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-405000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.567125ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-405000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-405000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000: exit status 7 (28.446792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-405000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-405000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-405000 "sudo crictl images -o json": exit status 89 (40.435417ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-405000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-405000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-405000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000: exit status 7 (28.383417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-405000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
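Note: the "invalid character '*'" error above is a downstream symptom rather than an image problem. The test expects "sudo crictl images -o json" to emit crictl's JSON image list, but with the node stopped minikube prints plain-text advice starting with "*", which cannot be decoded as JSON. A minimal Go sketch of the decode failure follows; the struct shape is an assumption modeled on crictl's JSON output, not copied from the test code.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageList approximates the shape of "crictl images -o json" output
	// (assumed here for illustration).
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		// What the command actually printed while the node was stopped:
		got := `* The control plane node must be running for this command`
		var list imageList
		if err := json.Unmarshal([]byte(got), &list); err != nil {
			// Prints: invalid character '*' looking for beginning of value
			fmt.Println(err)
		}
	}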

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-405000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-405000 --alsologtostderr -v=1: exit status 89 (39.294042ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-405000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:14:34.410937    5007 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:14:34.411096    5007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:34.411099    5007 out.go:309] Setting ErrFile to fd 2...
	I0911 04:14:34.411101    5007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:34.411238    5007 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:14:34.411446    5007 out.go:303] Setting JSON to false
	I0911 04:14:34.411454    5007 mustload.go:65] Loading cluster: default-k8s-diff-port-405000
	I0911 04:14:34.411631    5007 config.go:182] Loaded profile config "default-k8s-diff-port-405000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:14:34.415050    5007 out.go:177] * The control plane node must be running for this command
	I0911 04:14:34.419028    5007 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-405000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-405000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000: exit status 7 (28.032583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-405000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000: exit status 7 (27.903125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-405000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-846000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-846000 "sudo crictl images -o json": exit status 89 (43.239708ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-846000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-846000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-846000"
start_stop_delete_test.go:304: v1.28.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.1",
- 	"registry.k8s.io/kube-controller-manager:v1.28.1",
- 	"registry.k8s.io/kube-proxy:v1.28.1",
- 	"registry.k8s.io/kube-scheduler:v1.28.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-846000 -n newest-cni-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-846000 -n newest-cni-846000: exit status 7 (29.221958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-846000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-846000 --alsologtostderr -v=1: exit status 89 (40.334625ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-846000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 04:14:38.979728    5037 out.go:296] Setting OutFile to fd 1 ...
	I0911 04:14:38.979853    5037 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:38.979868    5037 out.go:309] Setting ErrFile to fd 2...
	I0911 04:14:38.979871    5037 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 04:14:38.979992    5037 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 04:14:38.980224    5037 out.go:303] Setting JSON to false
	I0911 04:14:38.980233    5037 mustload.go:65] Loading cluster: newest-cni-846000
	I0911 04:14:38.980407    5037 config.go:182] Loaded profile config "newest-cni-846000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 04:14:38.983444    5037 out.go:177] * The control plane node must be running for this command
	I0911 04:14:38.987503    5037 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-846000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-846000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-846000 -n newest-cni-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-846000 -n newest-cni-846000: exit status 7 (29.287833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-846000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-846000 -n newest-cni-846000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-846000 -n newest-cni-846000: exit status 7 (29.331458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-846000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (135/244)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.28.1/json-events 9.14
11 TestDownloadOnly/v1.28.1/preload-exists 0
14 TestDownloadOnly/v1.28.1/kubectl 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.23
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
19 TestBinaryMirror 0.34
30 TestHyperKitDriverInstallOrUpdate 8.32
33 TestErrorSpam/setup 28.91
34 TestErrorSpam/start 0.34
35 TestErrorSpam/status 0.25
36 TestErrorSpam/pause 0.65
37 TestErrorSpam/unpause 0.63
38 TestErrorSpam/stop 3.24
41 TestFunctional/serial/CopySyncFile 0
42 TestFunctional/serial/StartWithProxy 45.44
43 TestFunctional/serial/AuditLog 0
44 TestFunctional/serial/SoftStart 33.7
45 TestFunctional/serial/KubeContext 0.03
46 TestFunctional/serial/KubectlGetPods 0.05
49 TestFunctional/serial/CacheCmd/cache/add_remote 3.54
50 TestFunctional/serial/CacheCmd/cache/add_local 1.39
51 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
52 TestFunctional/serial/CacheCmd/cache/list 0.03
53 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
54 TestFunctional/serial/CacheCmd/cache/cache_reload 0.91
55 TestFunctional/serial/CacheCmd/cache/delete 0.07
56 TestFunctional/serial/MinikubeKubectlCmd 0.41
57 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.54
58 TestFunctional/serial/ExtraConfig 36.93
59 TestFunctional/serial/ComponentHealth 0.04
60 TestFunctional/serial/LogsCmd 0.63
61 TestFunctional/serial/LogsFileCmd 0.59
62 TestFunctional/serial/InvalidService 3.64
64 TestFunctional/parallel/ConfigCmd 0.23
65 TestFunctional/parallel/DashboardCmd 13.17
66 TestFunctional/parallel/DryRun 0.21
67 TestFunctional/parallel/InternationalLanguage 0.11
68 TestFunctional/parallel/StatusCmd 0.26
73 TestFunctional/parallel/AddonsCmd 0.12
74 TestFunctional/parallel/PersistentVolumeClaim 24.94
77 TestFunctional/parallel/CpCmd 0.3
79 TestFunctional/parallel/FileSync 0.08
80 TestFunctional/parallel/CertSync 0.44
84 TestFunctional/parallel/NodeLabels 0.04
86 TestFunctional/parallel/NonActiveRuntimeDisabled 0.08
88 TestFunctional/parallel/License 0.2
90 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
91 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
93 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.14
94 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
95 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
96 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
97 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
98 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
99 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
100 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
101 TestFunctional/parallel/ServiceCmd/List 0.29
102 TestFunctional/parallel/ServiceCmd/JSONOutput 0.28
103 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
104 TestFunctional/parallel/ServiceCmd/Format 0.11
105 TestFunctional/parallel/ServiceCmd/URL 0.11
106 TestFunctional/parallel/ProfileCmd/profile_not_create 0.19
107 TestFunctional/parallel/ProfileCmd/profile_list 0.15
108 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
109 TestFunctional/parallel/MountCmd/any-port 5.41
112 TestFunctional/parallel/Version/short 0.05
113 TestFunctional/parallel/Version/components 0.21
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.09
118 TestFunctional/parallel/ImageCommands/ImageBuild 1.69
119 TestFunctional/parallel/ImageCommands/Setup 1.56
120 TestFunctional/parallel/DockerEnv/bash 0.41
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.07
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.23
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.56
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.55
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.55
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.17
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.61
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.69
131 TestFunctional/delete_addon-resizer_images 0.12
132 TestFunctional/delete_my-image_image 0.04
133 TestFunctional/delete_minikube_cached_images 0.04
137 TestImageBuild/serial/Setup 30.21
138 TestImageBuild/serial/NormalBuild 1.21
140 TestImageBuild/serial/BuildWithDockerIgnore 0.12
141 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.1
144 TestIngressAddonLegacy/StartLegacyK8sCluster 96.48
146 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.34
147 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.22
151 TestJSONOutput/start/Command 84.57
152 TestJSONOutput/start/Audit 0
154 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/pause/Command 0.25
158 TestJSONOutput/pause/Audit 0
160 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/unpause/Command 0.2
164 TestJSONOutput/unpause/Audit 0
166 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/stop/Command 9.08
170 TestJSONOutput/stop/Audit 0
172 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
174 TestErrorJSONOutput 0.33
179 TestMainNoArgs 0.03
180 TestMinikubeProfile 61.24
236 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
240 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
241 TestNoKubernetes/serial/ProfileList 0.14
242 TestNoKubernetes/serial/Stop 0.06
244 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
259 TestStartStop/group/old-k8s-version/serial/Stop 0.06
260 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
273 TestStartStop/group/no-preload/serial/Stop 0.06
274 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
278 TestStartStop/group/embed-certs/serial/Stop 0.06
279 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
295 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
296 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
298 TestStartStop/group/newest-cni/serial/DeployApp 0
299 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
300 TestStartStop/group/newest-cni/serial/Stop 0.06
301 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
307 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-412000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-412000: exit status 85 (92.058584ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-412000 | jenkins | v1.31.2 | 11 Sep 23 03:53 PDT |          |
	|         | -p download-only-412000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 03:53:37
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 03:53:37.558047    1567 out.go:296] Setting OutFile to fd 1 ...
	I0911 03:53:37.558188    1567 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:53:37.558191    1567 out.go:309] Setting ErrFile to fd 2...
	I0911 03:53:37.558193    1567 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:53:37.558298    1567 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	W0911 03:53:37.558368    1567 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17223-1124/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17223-1124/.minikube/config/config.json: no such file or directory
	I0911 03:53:37.559541    1567 out.go:303] Setting JSON to true
	I0911 03:53:37.575956    1567 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1391,"bootTime":1694428226,"procs":422,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 03:53:37.576032    1567 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 03:53:37.584502    1567 out.go:97] [download-only-412000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 03:53:37.588453    1567 out.go:169] MINIKUBE_LOCATION=17223
	W0911 03:53:37.584644    1567 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball: no such file or directory
	I0911 03:53:37.584679    1567 notify.go:220] Checking for updates...
	I0911 03:53:37.598457    1567 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 03:53:37.601486    1567 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 03:53:37.604474    1567 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 03:53:37.607477    1567 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	W0911 03:53:37.611988    1567 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0911 03:53:37.612224    1567 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 03:53:37.617433    1567 out.go:97] Using the qemu2 driver based on user configuration
	I0911 03:53:37.617438    1567 start.go:298] selected driver: qemu2
	I0911 03:53:37.617450    1567 start.go:902] validating driver "qemu2" against <nil>
	I0911 03:53:37.617492    1567 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 03:53:37.620383    1567 out.go:169] Automatically selected the socket_vmnet network
	I0911 03:53:37.626920    1567 start_flags.go:384] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0911 03:53:37.626999    1567 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0911 03:53:37.627083    1567 cni.go:84] Creating CNI manager for ""
	I0911 03:53:37.627098    1567 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0911 03:53:37.627103    1567 start_flags.go:321] config:
	{Name:download-only-412000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-412000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:53:37.632211    1567 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 03:53:37.636610    1567 out.go:97] Downloading VM boot image ...
	I0911 03:53:37.636746    1567 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/iso/arm64/minikube-v1.31.0-1692872107-17120-arm64.iso
	I0911 03:53:43.130618    1567 out.go:97] Starting control plane node download-only-412000 in cluster download-only-412000
	I0911 03:53:43.130642    1567 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 03:53:43.185800    1567 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0911 03:53:43.185889    1567 cache.go:57] Caching tarball of preloaded images
	I0911 03:53:43.186066    1567 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 03:53:43.189662    1567 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0911 03:53:43.189671    1567 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:53:43.270213    1567 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0911 03:53:50.843206    1567 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:53:50.843359    1567 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:53:51.485758    1567 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0911 03:53:51.485950    1567 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/download-only-412000/config.json ...
	I0911 03:53:51.485972    1567 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/download-only-412000/config.json: {Name:mk93908f6e70cc7147706f6edb9295b5967f3765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 03:53:51.486204    1567 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0911 03:53:51.486379    1567 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0911 03:53:51.834950    1567 out.go:169] 
	W0911 03:53:51.841140    1567 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17223-1124/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10630df68 0x10630df68 0x10630df68 0x10630df68 0x10630df68 0x10630df68 0x10630df68] Decompressors:map[bz2:0x14000057da8 gz:0x14000057e00 tar:0x14000057db0 tar.bz2:0x14000057dc0 tar.gz:0x14000057dd0 tar.xz:0x14000057de0 tar.zst:0x14000057df0 tbz2:0x14000057dc0 tgz:0x14000057dd0 txz:0x14000057de0 tzst:0x14000057df0 xz:0x14000057e08 zip:0x14000057e10 zst:0x14000057e20] Getters:map[file:0x1400019eca0 http:0x14000f02140 https:0x14000f02190] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0911 03:53:51.841165    1567 out_reason.go:110] 
	W0911 03:53:51.848024    1567 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 03:53:51.852024    1567 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-412000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
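Note: the "Last Start" log above records why kubectl was never cached for this profile: the checksum file at https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 returns HTTP 404, most likely because darwin/arm64 kubectl binaries were never published for v1.16.0. A minimal Go sketch that reproduces the check, written for this report and using only the URL taken from the log:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Checksum URL copied from the download failure in the log above.
		url := "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		// The run above observed "bad response code: 404" for this file.
		fmt.Println(url, "->", resp.Status)
	}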

                                                
                                    
TestDownloadOnly/v1.28.1/json-events (9.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-412000 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-412000 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=docker --driver=qemu2 : (9.139423375s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (9.14s)

TestDownloadOnly/v1.28.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

TestDownloadOnly/v1.28.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.1/kubectl
--- PASS: TestDownloadOnly/v1.28.1/kubectl (0.00s)

TestDownloadOnly/v1.28.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-412000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-412000: exit status 85 (75.686542ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-412000 | jenkins | v1.31.2 | 11 Sep 23 03:53 PDT |          |
	|         | -p download-only-412000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-412000 | jenkins | v1.31.2 | 11 Sep 23 03:53 PDT |          |
	|         | -p download-only-412000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 03:53:52
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.7 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 03:53:52.041833    1579 out.go:296] Setting OutFile to fd 1 ...
	I0911 03:53:52.041960    1579 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:53:52.041963    1579 out.go:309] Setting ErrFile to fd 2...
	I0911 03:53:52.041965    1579 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:53:52.042085    1579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	W0911 03:53:52.042151    1579 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17223-1124/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17223-1124/.minikube/config/config.json: no such file or directory
	I0911 03:53:52.043117    1579 out.go:303] Setting JSON to true
	I0911 03:53:52.058368    1579 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1406,"bootTime":1694428226,"procs":421,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 03:53:52.058444    1579 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 03:53:52.063133    1579 out.go:97] [download-only-412000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 03:53:52.066993    1579 out.go:169] MINIKUBE_LOCATION=17223
	I0911 03:53:52.063260    1579 notify.go:220] Checking for updates...
	I0911 03:53:52.072994    1579 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 03:53:52.076059    1579 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 03:53:52.079079    1579 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 03:53:52.080591    1579 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	W0911 03:53:52.087036    1579 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0911 03:53:52.087354    1579 config.go:182] Loaded profile config "download-only-412000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0911 03:53:52.087381    1579 start.go:810] api.Load failed for download-only-412000: filestore "download-only-412000": Docker machine "download-only-412000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0911 03:53:52.087418    1579 driver.go:373] Setting default libvirt URI to qemu:///system
	W0911 03:53:52.087429    1579 start.go:810] api.Load failed for download-only-412000: filestore "download-only-412000": Docker machine "download-only-412000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0911 03:53:52.091048    1579 out.go:97] Using the qemu2 driver based on existing profile
	I0911 03:53:52.091055    1579 start.go:298] selected driver: qemu2
	I0911 03:53:52.091058    1579 start.go:902] validating driver "qemu2" against &{Name:download-only-412000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-412000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:53:52.092967    1579 cni.go:84] Creating CNI manager for ""
	I0911 03:53:52.092980    1579 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0911 03:53:52.092996    1579 start_flags.go:321] config:
	{Name:download-only-412000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:download-only-412000 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:53:52.096719    1579 iso.go:125] acquiring lock: {Name:mk93ecfb1efa8aa22d56a7ab316dc777d0c1a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 03:53:52.100058    1579 out.go:97] Starting control plane node download-only-412000 in cluster download-only-412000
	I0911 03:53:52.100067    1579 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 03:53:52.159093    1579 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	I0911 03:53:52.159120    1579 cache.go:57] Caching tarball of preloaded images
	I0911 03:53:52.159268    1579 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime docker
	I0911 03:53:52.164429    1579 out.go:97] Downloading Kubernetes v1.28.1 preload ...
	I0911 03:53:52.164437    1579 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4 ...
	I0911 03:53:52.240463    1579 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.1/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4?checksum=md5:014fa2c9750ed18a91c50dffb6ed7aeb -> /Users/jenkins/minikube-integration/17223-1124/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-412000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.08s)
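
The v1.28.1 runs succeed because the preload tarball URL carries an md5 checksum query parameter that the getter verifies after download. A manual spot-check of the cached tarball, assuming a default MINIKUBE_HOME of ~/.minikube (the CI job overrides it):

    # macOS ships `md5`; the digest should match the one embedded in the URL above.
    md5 -q "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-docker-overlay2-arm64.tar.lz4"
    # expected: 014fa2c9750ed18a91c50dffb6ed7aeb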

TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-412000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.34s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-975000 --alsologtostderr --binary-mirror http://127.0.0.1:49391 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-975000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-975000
--- PASS: TestBinaryMirror (0.34s)
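
TestBinaryMirror points --binary-mirror at a test-local HTTP server so that kubectl, kubelet, and kubeadm are fetched from it instead of dl.k8s.io. A rough manual equivalent, assuming python3 and a /tmp/mirror directory laid out like the upstream release tree (both illustrative, not from the test):

    python3 -m http.server 49391 --directory /tmp/mirror &
    out/minikube-darwin-arm64 start --download-only -p binary-mirror-demo \
        --binary-mirror http://127.0.0.1:49391 --driver=qemu2
    out/minikube-darwin-arm64 delete -p binary-mirror-demo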

TestHyperKitDriverInstallOrUpdate (8.32s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.32s)

TestErrorSpam/setup (28.91s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-298000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-298000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 --driver=qemu2 : (28.910043042s)
--- PASS: TestErrorSpam/setup (28.91s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-298000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-298000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-298000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-298000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-298000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-298000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.65s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-298000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-298000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-298000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 pause
--- PASS: TestErrorSpam/pause (0.65s)

TestErrorSpam/unpause (0.63s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-298000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-298000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-298000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 unpause
--- PASS: TestErrorSpam/unpause (0.63s)

TestErrorSpam/stop (3.24s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-298000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-298000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 stop: (3.070928041s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-298000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-298000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-298000 stop
--- PASS: TestErrorSpam/stop (3.24s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17223-1124/.minikube/files/etc/test/nested/copy/1565/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (45.44s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-740000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-740000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (45.437331917s)
--- PASS: TestFunctional/serial/StartWithProxy (45.44s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (33.7s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-740000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-740000 --alsologtostderr -v=8: (33.70390525s)
functional_test.go:659: soft start took 33.704310459s for "functional-740000" cluster.
--- PASS: TestFunctional/serial/SoftStart (33.70s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-740000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-740000 cache add registry.k8s.io/pause:3.1: (1.206403875s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-740000 cache add registry.k8s.io/pause:3.3: (1.186737042s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-740000 cache add registry.k8s.io/pause:latest: (1.143677875s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-740000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local1252200694/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 cache add minikube-local-cache-test:functional-740000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-arm64 -p functional-740000 cache add minikube-local-cache-test:functional-740000: (1.064596958s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 cache delete minikube-local-cache-test:functional-740000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-740000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.39s)
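
add_local exercises the other half of the cache: an image that exists only in the host's Docker daemon is saved, copied into the node, and loaded there, so the cluster can run it without a registry. The same flow with a throwaway tag (hypothetical; the test generates a temporary build context and tag):

    docker build -t local-cache-demo:v1 .    # assumes a trivial Dockerfile in the current directory
    out/minikube-darwin-arm64 -p functional-740000 cache add local-cache-demo:v1
    out/minikube-darwin-arm64 -p functional-740000 cache delete local-cache-demo:v1
    docker rmi local-cache-demo:v1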

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (73.44725ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.91s)
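
cache_reload pins down what `cache reload` is for: after an image is removed inside the node (hence the crictl inspecti failure above), reload pushes every cached image back into the node's runtime. Condensed from the commands the test runs:

    out/minikube-darwin-arm64 -p functional-740000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-arm64 -p functional-740000 cache reload    # re-loads all cached images into the node
    out/minikube-darwin-arm64 -p functional-740000 ssh sudo crictl inspecti registry.k8s.io/pause:latest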

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.41s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 kubectl -- --context functional-740000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.41s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-740000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.54s)

TestFunctional/serial/ExtraConfig (36.93s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-740000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-740000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.925390375s)
functional_test.go:757: restart took 36.925496916s for "functional-740000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.93s)
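
ExtraConfig restarts the cluster with a flag passed straight through to a Kubernetes component; the syntax is component.key=value and the flag may be repeated. For example (the kubelet line is an illustrative addition, not part of this test):

    out/minikube-darwin-arm64 start -p functional-740000 --wait=all \
        --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
        --extra-config=kubelet.max-pods=150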

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-740000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.63s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.63s)

TestFunctional/serial/LogsFileCmd (0.59s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd2447844655/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.59s)

TestFunctional/serial/InvalidService (3.64s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-740000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-740000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-740000: exit status 115 (112.525125ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32090 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-740000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.64s)
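
InvalidService only needs a Service whose selector matches no running pod: `minikube service` then exits 115 with SVC_UNREACHABLE instead of printing a dead URL. A minimal stand-in for testdata/invalidsvc.yaml (a sketch, not the actual fixture):

    kubectl --context functional-740000 apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: invalid-svc
    spec:
      type: NodePort
      selector:
        app: no-such-pod    # matches nothing, so the service never gets endpoints
      ports:
      - port: 80
    EOF
    out/minikube-darwin-arm64 service invalid-svc -p functional-740000    # exit status 115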

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 config get cpus: exit status 14 (29.815292ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 config get cpus: exit status 14 (29.759792ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)
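
ConfigCmd establishes the exit-code contract of `minikube config`: set/get/unset round-trip cleanly, while get on an unset key fails with exit code 14. In shell terms:

    out/minikube-darwin-arm64 -p functional-740000 config set cpus 2
    out/minikube-darwin-arm64 -p functional-740000 config get cpus             # prints 2
    out/minikube-darwin-arm64 -p functional-740000 config unset cpus
    out/minikube-darwin-arm64 -p functional-740000 config get cpus; echo $?    # key not found, exit 14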

TestFunctional/parallel/DashboardCmd (13.17s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-740000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-740000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2143: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.17s)

TestFunctional/parallel/DryRun (0.21s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-740000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-740000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.887042ms)

-- stdout --
	* [functional-740000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0911 03:58:23.262069    2126 out.go:296] Setting OutFile to fd 1 ...
	I0911 03:58:23.262184    2126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:58:23.262187    2126 out.go:309] Setting ErrFile to fd 2...
	I0911 03:58:23.262189    2126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:58:23.262309    2126 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 03:58:23.263262    2126 out.go:303] Setting JSON to false
	I0911 03:58:23.278848    2126 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1677,"bootTime":1694428226,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 03:58:23.278914    2126 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 03:58:23.283979    2126 out.go:177] * [functional-740000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	I0911 03:58:23.291028    2126 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 03:58:23.294997    2126 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 03:58:23.291038    2126 notify.go:220] Checking for updates...
	I0911 03:58:23.300869    2126 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 03:58:23.303968    2126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 03:58:23.307089    2126 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 03:58:23.309978    2126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 03:58:23.313180    2126 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:58:23.313436    2126 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 03:58:23.317975    2126 out.go:177] * Using the qemu2 driver based on existing profile
	I0911 03:58:23.324932    2126 start.go:298] selected driver: qemu2
	I0911 03:58:23.324937    2126 start.go:902] validating driver "qemu2" against &{Name:functional-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:functional-740000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:58:23.324988    2126 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 03:58:23.330965    2126 out.go:177] 
	W0911 03:58:23.334927    2126 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0911 03:58:23.338941    2126 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-740000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.21s)
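
DryRun validates flags without touching the existing VM: a request below minikube's 1800MB memory floor fails fast with RSRC_INSUFFICIENT_REQ_MEMORY (exit code 23), while the same dry run with acceptable flags exits 0. In shell terms:

    out/minikube-darwin-arm64 start -p functional-740000 --dry-run --memory 250MB --driver=qemu2; echo $?    # 23
    out/minikube-darwin-arm64 start -p functional-740000 --dry-run --driver=qemu2; echo $?                   # 0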

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-740000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-740000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.436917ms)

-- stdout --
	* [functional-740000] minikube v1.31.2 sur Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0911 03:58:23.147390    2122 out.go:296] Setting OutFile to fd 1 ...
	I0911 03:58:23.147498    2122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:58:23.147502    2122 out.go:309] Setting ErrFile to fd 2...
	I0911 03:58:23.147504    2122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 03:58:23.147629    2122 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
	I0911 03:58:23.149060    2122 out.go:303] Setting JSON to false
	I0911 03:58:23.166565    2122 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1677,"bootTime":1694428226,"procs":415,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.5.2","kernelVersion":"22.6.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0911 03:58:23.166643    2122 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0911 03:58:23.171058    2122 out.go:177] * [functional-740000] minikube v1.31.2 sur Darwin 13.5.2 (arm64)
	I0911 03:58:23.179047    2122 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 03:58:23.182964    2122 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	I0911 03:58:23.179184    2122 notify.go:220] Checking for updates...
	I0911 03:58:23.189980    2122 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0911 03:58:23.192995    2122 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 03:58:23.195959    2122 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	I0911 03:58:23.199005    2122 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 03:58:23.202267    2122 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
	I0911 03:58:23.202531    2122 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 03:58:23.207009    2122 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0911 03:58:23.212907    2122 start.go:298] selected driver: qemu2
	I0911 03:58:23.212912    2122 start.go:902] validating driver "qemu2" against &{Name:functional-740000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.1 ClusterName:functional-740000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 03:58:23.212965    2122 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 03:58:23.218946    2122 out.go:177] 
	W0911 03:58:23.222996    2122 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0911 03:58:23.226917    2122 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)
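
The French output above is driven by the process locale rather than a flag, so the same failure should be reproducible by exporting a French locale before the dry run (the exact locale value is an assumption; the test harness sets it internally):

    LC_ALL=fr_FR.UTF-8 out/minikube-darwin-arm64 start -p functional-740000 \
        --dry-run --memory 250MB --driver=qemu2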

TestFunctional/parallel/StatusCmd (0.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)
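
status -f renders a Go template over minikube's status struct; note that the test's format string labels one field "kublet", a typo in the label only, since the template field itself is .Kubelet. A corrected one-liner:

    out/minikube-darwin-arm64 -p functional-740000 status \
        -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'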

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (24.94s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [bb69cc6c-d468-4340-92f4-8386dbe0fa68] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006057125s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-740000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-740000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-740000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-740000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [159354dd-cc26-4e06-9f8a-2eea2299aade] Pending
helpers_test.go:344: "sp-pod" [159354dd-cc26-4e06-9f8a-2eea2299aade] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [159354dd-cc26-4e06-9f8a-2eea2299aade] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.008490375s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-740000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-740000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-740000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d56f7485-867d-41b9-aa97-61fd990e8f2d] Pending
helpers_test.go:344: "sp-pod" [d56f7485-867d-41b9-aa97-61fd990e8f2d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d56f7485-867d-41b9-aa97-61fd990e8f2d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0067565s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-740000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.94s)
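Note: the sequence above is a persistence round-trip: write a file through the first sp-pod, delete the pod, recreate it against the same claim, and confirm the file survived. A minimal sketch of the same round-trip, assuming the context and manifest paths from this run (the test also waits for the recreated pod to become Ready before the final exec):

    package main

    import (
        "log"
        "os/exec"
    )

    // run executes one kubectl step and aborts on failure.
    func run(args ...string) {
        if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
            log.Fatalf("kubectl %v: %v\n%s", args, err, out)
        }
    }

    func main() {
        ctx := "--context=functional-740000"
        run(ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
        run(ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
        run(ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
        // ...wait for the new sp-pod to be Ready here...
        run(ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount") // "foo" must still be listed
    }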

TestFunctional/parallel/CpCmd (0.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh -n functional-740000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 cp functional-740000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd433482928/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh -n functional-740000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.30s)

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1565/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "sudo cat /etc/test/nested/copy/1565/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.44s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1565.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "sudo cat /etc/ssl/certs/1565.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1565.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "sudo cat /usr/share/ca-certificates/1565.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/15652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "sudo cat /etc/ssl/certs/15652.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/15652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "sudo cat /usr/share/ca-certificates/15652.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.44s)
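Note: each certificate is probed under its synced filename (1565.pem, 15652.pem) and under a hashed name (51391683.0, 3ec20f2e.0), presumably the OpenSSL subject-hash alias used for CA lookup. A minimal sketch of the same presence check, assuming the binary path, profile and guest paths from this run:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        paths := []string{
            "/etc/ssl/certs/1565.pem",
            "/usr/share/ca-certificates/1565.pem",
            "/etc/ssl/certs/51391683.0", // hashed alias for the same cert (assumption)
        }
        for _, p := range paths {
            out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-740000",
                "ssh", "sudo cat "+p).Output()
            if err != nil {
                fmt.Println(p, "missing:", err)
                continue
            }
            fmt.Println(p, "->", len(out), "bytes")
        }
    }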

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-740000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "sudo systemctl is-active crio": exit status 1 (79.054417ms)

-- stdout --
	inactive
-- /stdout --
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)
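Note: the non-zero exit above is the expected outcome: `systemctl is-active` exits with status 3 when a unit is inactive, and crio should be inactive on a docker-runtime cluster, so the test treats the failure as a pass. A sketch of the same inverted check, assuming the binary path and profile from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-740000",
            "ssh", "sudo systemctl is-active crio").CombinedOutput()
        // Here an error plus "inactive" on stdout is the desired state.
        if err != nil && strings.Contains(string(out), "inactive") {
            fmt.Println("crio is disabled, as expected")
            return
        }
        fmt.Printf("unexpected state: %s (err=%v)\n", out, err)
    }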

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-740000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-740000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-740000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-740000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1959: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-740000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-740000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [fd4f18bd-ef4f-4b9f-894b-bfa926cb8358] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [fd4f18bd-ef4f-4b9f-894b-bfa926cb8358] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.014012291s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.14s)
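Note: the helper's wait amounts to polling for a Running pod carrying the run=nginx-svc label until a deadline. A minimal sketch of an equivalent poll, assuming the context and label from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // same budget as the test
        for time.Now().Before(deadline) {
            out, _ := exec.Command("kubectl", "--context", "functional-740000",
                "get", "pods", "-l", "run=nginx-svc",
                "-o", "jsonpath={.items[*].status.phase}").Output()
            if strings.Contains(string(out), "Running") {
                fmt.Println("nginx-svc pod is up")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for nginx-svc")
    }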

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-740000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.96.178 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
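Note: the dig query is sent straight to the cluster DNS service at 10.96.0.10, which is only reachable from the host while `minikube tunnel` is routing the service network. A sketch of the same lookup from Go, assuming the resolver address and service name from this run:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Force every query through the in-cluster DNS server.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 5 * time.Second}
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        addrs, err := r.LookupHost(context.Background(),
            "nginx-svc.default.svc.cluster.local.")
        fmt.Println(addrs, err)
    }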

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-740000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-740000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-740000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-r2wpj" [d75329d7-a3e4-4016-b10a-f1b4fb538f6a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-r2wpj" [d75329d7-a3e4-4016-b10a-f1b4fb538f6a] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.0082525s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

TestFunctional/parallel/ServiceCmd/List (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.29s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 service list -o json
functional_test.go:1493: Took "284.699334ms" to run "out/minikube-darwin-arm64 -p functional-740000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.28s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:32107
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

TestFunctional/parallel/ServiceCmd/URL (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:32107
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "117.943875ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "33.620208ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "115.429ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "34.630375ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

TestFunctional/parallel/MountCmd/any-port (5.41s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-740000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2075074336/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694429884627191000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2075074336/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694429884627191000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2075074336/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694429884627191000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2075074336/001/test-1694429884627191000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.363875ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 11 10:58 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 11 10:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 11 10:58 test-1694429884627191000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh cat /mount-9p/test-1694429884627191000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-740000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [bf77116c-80c0-46a1-a227-0343477e1125] Pending
helpers_test.go:344: "busybox-mount" [bf77116c-80c0-46a1-a227-0343477e1125] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [bf77116c-80c0-46a1-a227-0343477e1125] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [bf77116c-80c0-46a1-a227-0343477e1125] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00820925s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-740000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-740000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2075074336/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.41s)
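Note: the first findmnt probe fails because it races the mount becoming ready; the helper simply retries, and the second probe succeeds. A minimal sketch of that poll-until-mounted pattern, assuming the binary path, profile and mount point from this run:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        for i := 0; i < 10; i++ {
            // Exit code 0 means the 9p mount is visible inside the guest.
            err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-740000",
                "ssh", "findmnt -T /mount-9p | grep 9p").Run()
            if err == nil {
                fmt.Println("mount is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("mount never appeared")
    }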

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.21s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.21s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-740000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-740000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-740000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-740000 image ls --format short --alsologtostderr:
I0911 03:58:47.836614    2314 out.go:296] Setting OutFile to fd 1 ...
I0911 03:58:47.836771    2314 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 03:58:47.836774    2314 out.go:309] Setting ErrFile to fd 2...
I0911 03:58:47.836776    2314 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 03:58:47.836894    2314 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
I0911 03:58:47.837259    2314 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 03:58:47.837320    2314 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 03:58:47.838510    2314 ssh_runner.go:195] Run: systemctl --version
I0911 03:58:47.838518    2314 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/id_rsa Username:docker}
I0911 03:58:47.871041    2314 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-740000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.28.1           | b4a5a57e99492 | 57.8MB |
| registry.k8s.io/kube-proxy                  | v1.28.1           | 812f5241df7fd | 68.3MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-controller-manager     | v1.28.1           | 8b6e1980b7584 | 116MB  |
| gcr.io/google-containers/addon-resizer      | functional-740000 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| docker.io/library/nginx                     | latest            | 91582cfffc2d0 | 192MB  |
| docker.io/library/nginx                     | alpine            | fa0c6bb795403 | 43.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-740000 | 0349aebacd338 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.28.1           | b29fb62480892 | 119MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-740000 image ls --format table --alsologtostderr:
I0911 03:58:47.997586    2322 out.go:296] Setting OutFile to fd 1 ...
I0911 03:58:47.997702    2322 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 03:58:47.997706    2322 out.go:309] Setting ErrFile to fd 2...
I0911 03:58:47.997709    2322 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 03:58:47.997838    2322 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
I0911 03:58:47.998220    2322 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 03:58:47.998283    2322 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 03:58:47.999021    2322 ssh_runner.go:195] Run: systemctl --version
I0911 03:58:47.999029    2322 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/id_rsa Username:docker}
I0911 03:58:48.033252    2322 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-740000 image ls --format json --alsologtostderr:
[{"id":"91582cfffc2d0daa6f42adb6fb74665a047310f76a28e9ed5b0185a2d0f362a6","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-740000"],"size":"32900000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"0349aebacd338bce892b9d899dbd8c2c6c82784973aef731ad259d24950c78a2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-740000"],"size":"30"},{"id":"fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43400000"},{"id":"b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"119000000"},{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"68300000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"116000000"},{"id":"b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"57800000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-740000 image ls --format json --alsologtostderr:
I0911 03:58:47.921434    2318 out.go:296] Setting OutFile to fd 1 ...
I0911 03:58:47.921561    2318 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 03:58:47.921565    2318 out.go:309] Setting ErrFile to fd 2...
I0911 03:58:47.921567    2318 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 03:58:47.921690    2318 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
I0911 03:58:47.922107    2318 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 03:58:47.922174    2318 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 03:58:47.922977    2318 ssh_runner.go:195] Run: systemctl --version
I0911 03:58:47.922987    2318 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/id_rsa Username:docker}
I0911 03:58:47.955018    2318 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
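Note: the JSON listing is an array of objects with id, repoDigests, repoTags and size fields (size is a string of bytes, not a number). A minimal sketch of consuming it, assuming the binary path and profile from this run:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // image mirrors the fields visible in the output above.
    type image struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"`
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-740000",
            "image", "ls", "--format", "json").Output()
        if err != nil {
            panic(err)
        }
        var images []image
        if err := json.Unmarshal(out, &images); err != nil {
            panic(err)
        }
        for _, img := range images {
            fmt.Println(img.RepoTags, img.Size)
        }
    }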

TestFunctional/parallel/ImageCommands/ImageListYaml (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-740000 image ls --format yaml --alsologtostderr:
- id: b29fb62480892633ac479243b9841b88f9ae30865773fd76b97522541cd5633a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "119000000"
- id: b4a5a57e994924bffc4556da6c6c39d27ebaf593155983163d0b2367037bcb87
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "57800000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-740000
size: "32900000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 0349aebacd338bce892b9d899dbd8c2c6c82784973aef731ad259d24950c78a2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-740000
size: "30"
- id: fa0c6bb795403f8762e5cbf7b9f395aa036e7bd61c707485c1968b79bb3da9f1
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43400000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8b6e1980b7584ebf92ee961322982c26a525c4e4e2181e037b8854697be71965
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "116000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 91582cfffc2d0daa6f42adb6fb74665a047310f76a28e9ed5b0185a2d0f362a6
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 812f5241df7fd64adb98d461bd6259a825a371fb3b2d5258752579380bc39c26
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "68300000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-740000 image ls --format yaml --alsologtostderr:
I0911 03:58:47.836438    2313 out.go:296] Setting OutFile to fd 1 ...
I0911 03:58:47.836758    2313 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 03:58:47.836762    2313 out.go:309] Setting ErrFile to fd 2...
I0911 03:58:47.836765    2313 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 03:58:47.836887    2313 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
I0911 03:58:47.837272    2313 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 03:58:47.837329    2313 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 03:58:47.838133    2313 ssh_runner.go:195] Run: systemctl --version
I0911 03:58:47.838144    2313 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/id_rsa Username:docker}
I0911 03:58:47.871084    2313 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
W0911 03:58:47.888885    2313 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 57fb6a5a-8f59-4279-8b6c-8494f442a830
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.09s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh pgrep buildkitd: exit status 1 (67.648ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image build -t localhost/my-image:functional-740000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-740000 image build -t localhost/my-image:functional-740000 testdata/build --alsologtostderr: (1.542953083s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-740000 image build -t localhost/my-image:functional-740000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 6fc2faf0c002
Removing intermediate container 6fc2faf0c002
---> 8d4e4962734f
Step 3/3 : ADD content.txt /
---> 9001035730b4
Successfully built 9001035730b4
Successfully tagged localhost/my-image:functional-740000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-740000 image build -t localhost/my-image:functional-740000 testdata/build --alsologtostderr:
I0911 03:58:47.987338    2321 out.go:296] Setting OutFile to fd 1 ...
I0911 03:58:47.987553    2321 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 03:58:47.987559    2321 out.go:309] Setting ErrFile to fd 2...
I0911 03:58:47.987561    2321 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 03:58:47.987701    2321 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17223-1124/.minikube/bin
I0911 03:58:47.988125    2321 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 03:58:47.988547    2321 config.go:182] Loaded profile config "functional-740000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.1
I0911 03:58:47.989385    2321 ssh_runner.go:195] Run: systemctl --version
I0911 03:58:47.989396    2321 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17223-1124/.minikube/machines/functional-740000/id_rsa Username:docker}
I0911 03:58:48.021994    2321 build_images.go:151] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2299693391.tar
I0911 03:58:48.022058    2321 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0911 03:58:48.024883    2321 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2299693391.tar
I0911 03:58:48.027715    2321 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2299693391.tar: stat -c "%s %y" /var/lib/minikube/build/build.2299693391.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2299693391.tar': No such file or directory
I0911 03:58:48.027740    2321 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2299693391.tar --> /var/lib/minikube/build/build.2299693391.tar (3072 bytes)
I0911 03:58:48.035530    2321 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2299693391
I0911 03:58:48.044092    2321 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2299693391 -xf /var/lib/minikube/build/build.2299693391.tar
I0911 03:58:48.046921    2321 docker.go:339] Building image: /var/lib/minikube/build/build.2299693391
I0911 03:58:48.046966    2321 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-740000 /var/lib/minikube/build/build.2299693391
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0911 03:58:49.487779    2321 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-740000 /var/lib/minikube/build/build.2299693391: (1.44083475s)
I0911 03:58:49.487861    2321 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2299693391
I0911 03:58:49.491095    2321 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2299693391.tar
I0911 03:58:49.493777    2321 build_images.go:207] Built localhost/my-image:functional-740000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2299693391.tar
I0911 03:58:49.493790    2321 build_images.go:123] succeeded building to: functional-740000
I0911 03:58:49.493792    2321 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.69s)

TestFunctional/parallel/ImageCommands/Setup (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.505210209s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-740000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.56s)

TestFunctional/parallel/DockerEnv/bash (0.41s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-740000 docker-env) && out/minikube-darwin-arm64 status -p functional-740000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-740000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.41s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.07s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image load --daemon gcr.io/google-containers/addon-resizer:functional-740000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-740000 image load --daemon gcr.io/google-containers/addon-resizer:functional-740000 --alsologtostderr: (2.150895792s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.23s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image load --daemon gcr.io/google-containers/addon-resizer:functional-740000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-740000 image load --daemon gcr.io/google-containers/addon-resizer:functional-740000 --alsologtostderr: (1.480660375s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.56s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.373418958s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-740000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image load --daemon gcr.io/google-containers/addon-resizer:functional-740000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-740000 image load --daemon gcr.io/google-containers/addon-resizer:functional-740000 --alsologtostderr: (2.053606875s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.55s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image save gcr.io/google-containers/addon-resizer:functional-740000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.55s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image rm gcr.io/google-containers/addon-resizer:functional-740000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.17s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-740000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 image save --daemon gcr.io/google-containers/addon-resizer:functional-740000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-740000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.69s)
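
Together, the ImageCommands subtests above exercise a full image round-trip: load from the daemon, list, save to a file, remove, and reload. A minimal shell sketch of that flow, reusing the profile and tag from the log (the ./addon-resizer-save.tar path is illustrative):

    # load an image from the local Docker daemon into the cluster runtime
    out/minikube-darwin-arm64 -p functional-740000 image load --daemon gcr.io/google-containers/addon-resizer:functional-740000
    # list images known to the cluster runtime
    out/minikube-darwin-arm64 -p functional-740000 image ls
    # save to a tarball, remove, then load it back from the file
    out/minikube-darwin-arm64 -p functional-740000 image save gcr.io/google-containers/addon-resizer:functional-740000 ./addon-resizer-save.tar
    out/minikube-darwin-arm64 -p functional-740000 image rm gcr.io/google-containers/addon-resizer:functional-740000
    out/minikube-darwin-arm64 -p functional-740000 image load ./addon-resizer-save.tar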

TestFunctional/delete_addon-resizer_images (0.12s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-740000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-740000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-740000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestImageBuild/serial/Setup (30.21s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-012000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-012000 --driver=qemu2 : (30.208282375s)
--- PASS: TestImageBuild/serial/Setup (30.21s)

TestImageBuild/serial/NormalBuild (1.21s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-012000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-012000: (1.214086792s)
--- PASS: TestImageBuild/serial/NormalBuild (1.21s)

TestImageBuild/serial/BuildWithDockerIgnore (0.12s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-012000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.12s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.1s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-012000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.10s)
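
For reference, the three build variants exercised above can be driven directly; a minimal sketch using the image-012000 profile and the testdata paths from the log:

    # plain build from a context directory
    out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-012000
    # pass a build option through to the runtime
    out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-012000
    # build with a Dockerfile at a non-default location
    out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-012000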

TestIngressAddonLegacy/StartLegacyK8sCluster (96.48s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-937000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-937000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m36.480541417s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (96.48s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.34s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-937000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-937000 addons enable ingress --alsologtostderr -v=5: (12.339049709s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.34s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.22s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-937000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.22s)
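
A condensed sketch of the legacy-ingress flow these subtests cover, with the cluster settings taken from the log:

    # bring up a v1.18.20 cluster, then enable the legacy ingress addons
    out/minikube-darwin-arm64 start -p ingress-addon-legacy-937000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=qemu2
    out/minikube-darwin-arm64 -p ingress-addon-legacy-937000 addons enable ingress --alsologtostderr -v=5
    out/minikube-darwin-arm64 -p ingress-addon-legacy-937000 addons enable ingress-dns --alsologtostderr -v=5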

TestJSONOutput/start/Command (84.57s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-021000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0911 04:02:31.394419    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
E0911 04:02:31.401325    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
E0911 04:02:31.411475    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
E0911 04:02:31.433499    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
E0911 04:02:31.475538    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
E0911 04:02:31.557344    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
E0911 04:02:31.719385    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
E0911 04:02:32.041426    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
E0911 04:02:32.683583    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
E0911 04:02:33.965659    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
E0911 04:02:36.527417    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
E0911 04:02:41.649408    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
E0911 04:02:51.891278    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
E0911 04:03:12.373279    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-021000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (1m24.568583834s)
--- PASS: TestJSONOutput/start/Command (84.57s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.25s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-021000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.25s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.2s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-021000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.20s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (9.08s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-021000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-021000 --output=json --user=testUser: (9.0748435s)
--- PASS: TestJSONOutput/stop/Command (9.08s)
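
The four JSON-output commands validated in this group form a complete profile lifecycle; a minimal sketch with the flags from the log:

    out/minikube-darwin-arm64 start -p json-output-021000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2
    out/minikube-darwin-arm64 pause -p json-output-021000 --output=json --user=testUser
    out/minikube-darwin-arm64 unpause -p json-output-021000 --output=json --user=testUser
    out/minikube-darwin-arm64 stop -p json-output-021000 --output=json --user=testUser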

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-739000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-739000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (88.64475ms)

-- stdout --
	{"specversion":"1.0","id":"d7d13f79-e108-44ef-b697-3a83e3965804","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-739000] minikube v1.31.2 on Darwin 13.5.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"da0b599d-f46e-46db-a89d-e4c2f75924ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17223"}}
	{"specversion":"1.0","id":"818dded6-8e19-487f-a667-116f83dc7bc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig"}}
	{"specversion":"1.0","id":"91d348b7-f018-4c28-b2f6-be5787f92bef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"c53311e6-e51c-4b7c-821e-815c663bde18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0fd26967-b464-43f4-b037-729c03987d71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube"}}
	{"specversion":"1.0","id":"0ff5d066-48a8-4357-9cb5-4cb97ac6557d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"98e53b83-b778-4fb4-a9b8-c03429b19669","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-739000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-739000
--- PASS: TestErrorJSONOutput (0.33s)
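
Each line of the stdout above is a self-contained CloudEvents-style JSON object, so it can be post-processed line by line. A hedged sketch (jq is an assumption here, not something this suite uses):

    # print one "type: message" line per emitted event (jq assumed to be installed)
    out/minikube-darwin-arm64 start -p json-output-error-739000 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r '.type + ": " + (.data.message // .data.name)'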

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMinikubeProfile (61.24s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-360000 --driver=qemu2 
E0911 04:03:53.334489    1565 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17223-1124/.minikube/profiles/functional-740000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-360000 --driver=qemu2 : (28.058962292s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-367000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-367000 --driver=qemu2 : (32.427470125s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-360000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-367000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-367000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-367000
helpers_test.go:175: Cleaning up "first-360000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-360000
--- PASS: TestMinikubeProfile (61.24s)
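
A minimal sketch of the multi-profile flow this test walks through, with the profile names from the log:

    out/minikube-darwin-arm64 start -p first-360000 --driver=qemu2
    out/minikube-darwin-arm64 start -p second-367000 --driver=qemu2
    # select the active profile, then list profiles in machine-readable form
    out/minikube-darwin-arm64 profile first-360000
    out/minikube-darwin-arm64 profile list -ojson
    # clean up both profiles
    out/minikube-darwin-arm64 delete -p second-367000
    out/minikube-darwin-arm64 delete -p first-360000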

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-657000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-657000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (94.014ms)

-- stdout --
	* [NoKubernetes-657000] minikube v1.31.2 on Darwin 13.5.2 (arm64)
	  - MINIKUBE_LOCATION=17223
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17223-1124/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17223-1124/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
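
The rejected invocation and the fix suggested in the stderr above, as a short sketch (the final start line is an assumed follow-up, not part of this test):

    # rejected with exit status 14: --kubernetes-version conflicts with --no-kubernetes
    out/minikube-darwin-arm64 start -p NoKubernetes-657000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2
    # clear the global setting as the error message advises, then retry without a version pin
    out/minikube-darwin-arm64 config unset kubernetes-version
    out/minikube-darwin-arm64 start -p NoKubernetes-657000 --no-kubernetes --driver=qemu2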

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-657000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-657000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (42.248791ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-657000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
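
The check above boils down to a single command; a sketch (exit status 89 here indicates the control plane is not running, per the stdout above):

    # non-zero exit while the node is stopped confirms kubelet is not active
    out/minikube-darwin-arm64 ssh -p NoKubernetes-657000 "sudo systemctl is-active --quiet service kubelet"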

TestNoKubernetes/serial/ProfileList (0.14s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.14s)

TestNoKubernetes/serial/Stop (0.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-657000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-657000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-657000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (42.416375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-657000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-327000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000: exit status 7 (29.3595ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-327000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
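
The stop / status / enable-addon sequence validated above, as a sketch (exit status 7 from status is expected for a stopped host, per the "may be ok" note in the log):

    out/minikube-darwin-arm64 stop -p old-k8s-version-327000 --alsologtostderr -v=3
    # exit status 7 with host state "Stopped" is acceptable here
    out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-327000 -n old-k8s-version-327000
    # addons can still be toggled while the cluster is stopped
    out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-327000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4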

TestStartStop/group/no-preload/serial/Stop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-616000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-616000 -n no-preload-616000: exit status 7 (28.480291ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-616000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-476000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-476000 -n embed-certs-476000: exit status 7 (28.271625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-476000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-405000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-405000 -n default-k8s-diff-port-405000: exit status 7 (28.595875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-405000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-846000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-846000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-846000 -n newest-cni-846000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-846000 -n newest-cni-846000: exit status 7 (28.376958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-846000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/244)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

TestDownloadOnly/v1.28.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/specific-port (12.08s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-740000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1974937509/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (66.683875ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (66.615792ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (66.311375ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (66.707ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (65.98775ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (66.640666ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.856875ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "sudo umount -f /mount-9p": exit status 1 (67.286958ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-740000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-740000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1974937509/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (12.08s)
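
For reference, the mount verification this skipped subtest attempts; /tmp/mnt is an illustrative host path, and on macOS an unsigned binary must first be allowed to listen on a non-localhost port before the mount can appear (the reason for the skip above):

    # start the 9p mount in the background on a fixed port
    out/minikube-darwin-arm64 mount -p functional-740000 /tmp/mnt:/mount-9p --alsologtostderr -v=1 --port 46464 &
    # check that the mount shows up inside the guest
    out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount-9p | grep 9p"
    # clean up
    out/minikube-darwin-arm64 -p functional-740000 ssh "sudo umount -f /mount-9p"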

TestFunctional/parallel/MountCmd/VerifyCleanup (15.06s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-740000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4008371176/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-740000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4008371176/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-740000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4008371176/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount1: exit status 1 (83.999542ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount2: exit status 1 (65.3415ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount2: exit status 1 (66.206709ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount2: exit status 1 (64.061375ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount2: exit status 1 (75.55575ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount2: exit status 1 (68.973375ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount2: exit status 1 (65.022666ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
2023/09/11 03:58:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T" /mount2: exit status 1 (137.064417ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-740000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4008371176/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-740000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4008371176/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-740000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4008371176/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (15.06s)
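Note: the repeated findmnt failures above are the expected symptom when the mount never comes up; minikube's mount command serves the host directory over a network listener (9p by default) that macOS blocks until the firewall prompt for the unsigned binary is acknowledged. A minimal manual check outside CI, assuming the same profile name (the host path below is illustrative only):

    # Illustrative manual repro; the host path is an assumption, not from this run.
    mkdir -p /tmp/mnt-check
    out/minikube-darwin-arm64 mount -p functional-740000 /tmp/mnt-check:/mount1 &
    # Acknowledge any macOS "allow incoming network connections" prompt, then:
    out/minikube-darwin-arm64 -p functional-740000 ssh "findmnt -T /mount1"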

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
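TestGvisorAddon is gated behind the test binary's --gvisor flag, which this run leaves at its false default. A hedged sketch of opting in locally; the package path and flag plumbing are assumptions about the harness, not taken from this report:

    # Assumed invocation; -args forwards everything after it to the compiled test binary.
    go test ./test/integration -run TestGvisorAddon -args --gvisor=true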

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.34s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-687000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-687000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-687000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-687000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-687000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-687000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-687000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-687000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-687000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-687000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-687000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: /etc/hosts:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: /etc/resolv.conf:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-687000

>>> host: crictl pods:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: crictl containers:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> k8s: describe netcat deployment:
error: context "cilium-687000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-687000" does not exist

>>> k8s: netcat logs:
error: context "cilium-687000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-687000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-687000" does not exist

>>> k8s: coredns logs:
error: context "cilium-687000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-687000" does not exist

>>> k8s: api server logs:
error: context "cilium-687000" does not exist

>>> host: /etc/cni:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: ip a s:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: ip r s:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: iptables-save:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: iptables table nat:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-687000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-687000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-687000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-687000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-687000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-687000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-687000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-687000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-687000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-687000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-687000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: kubelet daemon config:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> k8s: kubelet logs:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-687000

>>> host: docker daemon status:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: docker daemon config:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: docker system info:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: cri-docker daemon status:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: cri-docker daemon config:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: cri-dockerd version:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: containerd daemon status:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: containerd daemon config:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: containerd config dump:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: crio daemon status:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: crio daemon config:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: /etc/crio:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"

>>> host: crio config:
* Profile "cilium-687000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-687000"
----------------------- debugLogs end: cilium-687000 [took: 2.1148755s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-687000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-687000
--- SKIP: TestNetworkPlugins/group/cilium (2.34s)
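Every debugLogs probe above failed identically because the cilium-687000 profile, and hence its kubeconfig context, was never created before the skip fired. A small guard of the same shape (the context name comes from this log; the guard itself is only a sketch):

    # Sketch: collect kubectl-based debug output only if the context exists.
    if kubectl config get-contexts -o name | grep -qx cilium-687000; then
      kubectl --context cilium-687000 get pods -A
    else
      echo "context cilium-687000 missing; skipping kubectl debug collection"
    fi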

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-294000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-294000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)
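This group exercises minikube's --disable-driver-mounts start flag, which only affects drivers that provide built-in host mounts, hence the virtualbox-only gate. A hedged local equivalent, assuming a host where the virtualbox driver is available:

    # Assumes the virtualbox driver is installed on the host.
    minikube start -p disable-driver-mounts-294000 --driver=virtualbox --disable-driver-mounts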

                                                
                                    